00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 968
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3635
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.104 The recommended git tool is: git
00:00:00.104 using credential 00000000-0000-0000-0000-000000000002
00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.137 Fetching changes from the remote Git repository
00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.172 Using shallow fetch with depth 1
00:00:00.172 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.172 > git --version # timeout=10
00:00:00.198 > git --version # 'git version 2.39.2'
00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.221 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.221 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.664 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.673 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.684 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:04.684 > git config core.sparsecheckout # timeout=10
00:00:04.694 > git read-tree -mu HEAD # timeout=10
00:00:04.707 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:04.723 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:04.723 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:04.800 [Pipeline] Start of Pipeline
00:00:04.810 [Pipeline] library
00:00:04.811 Loading library shm_lib@master
00:00:04.811 Library shm_lib@master is cached. Copying from home.
00:00:04.822 [Pipeline] node
00:00:04.833 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:04.835 [Pipeline] {
00:00:04.843 [Pipeline] catchError
00:00:04.845 [Pipeline] {
00:00:04.857 [Pipeline] wrap
00:00:04.865 [Pipeline] {
00:00:04.875 [Pipeline] stage
00:00:04.877 [Pipeline] { (Prologue)
00:00:04.896 [Pipeline] echo
00:00:04.897 Node: VM-host-SM0
00:00:04.903 [Pipeline] cleanWs
00:00:04.916 [WS-CLEANUP] Deleting project workspace...
00:00:04.916 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.923 [WS-CLEANUP] done
00:00:05.130 [Pipeline] setCustomBuildProperty
00:00:05.217 [Pipeline] httpRequest
00:00:05.517 [Pipeline] echo
00:00:05.518 Sorcerer 10.211.164.20 is alive
00:00:05.527 [Pipeline] retry
00:00:05.529 [Pipeline] {
00:00:05.540 [Pipeline] httpRequest
00:00:05.543 HttpMethod: GET
00:00:05.544 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:05.544 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:05.555 Response Code: HTTP/1.1 200 OK
00:00:05.555 Success: Status code 200 is in the accepted range: 200,404
00:00:05.556 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.574 [Pipeline] }
00:00:08.591 [Pipeline] // retry
00:00:08.600 [Pipeline] sh
00:00:08.888 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.905 [Pipeline] httpRequest
00:00:09.274 [Pipeline] echo
00:00:09.276 Sorcerer 10.211.164.20 is alive
00:00:09.287 [Pipeline] retry
00:00:09.289 [Pipeline] {
00:00:09.306 [Pipeline] httpRequest
00:00:09.312 HttpMethod: GET
00:00:09.312 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.313 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.314 Response Code: HTTP/1.1 200 OK
00:00:09.315 Success: Status code 200 is in the accepted range: 200,404
00:00:09.315 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:29.576 [Pipeline] }
00:00:29.596 [Pipeline] // retry
00:00:29.604 [Pipeline] sh
00:00:29.894 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:32.444 [Pipeline] sh
00:00:32.727 + git -C spdk log --oneline -n5
00:00:32.727 c13c99a5e test: Various fixes for Fedora40
00:00:32.727 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:32.727 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:32.727 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:32.727 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:32.748 [Pipeline] withCredentials
00:00:32.759 > git --version # timeout=10
00:00:32.770 > git --version # 'git version 2.39.2'
00:00:32.786 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:32.789 [Pipeline] {
00:00:32.798 [Pipeline] retry
00:00:32.800 [Pipeline] {
00:00:32.815 [Pipeline] sh
00:00:33.096 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:33.367 [Pipeline] }
00:00:33.385 [Pipeline] // retry
00:00:33.391 [Pipeline] }
00:00:33.407 [Pipeline] // withCredentials
00:00:33.417 [Pipeline] httpRequest
00:00:33.850 [Pipeline] echo
00:00:33.852 Sorcerer 10.211.164.20 is alive
00:00:33.862 [Pipeline] retry
00:00:33.864 [Pipeline] {
00:00:33.877 [Pipeline] httpRequest
00:00:33.882 HttpMethod: GET
00:00:33.883 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:33.884 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:33.891 Response Code: HTTP/1.1 200 OK
00:00:33.891 Success: Status code 200 is in the accepted range: 200,404
00:00:33.892 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:57.535 [Pipeline] }
00:00:57.553 [Pipeline] // retry
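The three downloads above (jbp, spdk, dpdk) all follow the same staging pattern: the job probes the Sorcerer package mirror at 10.211.164.20, fetches a tarball pinned to an exact revision, and unpacks it with --no-same-owner so the files belong to the Jenkins user rather than whatever owner the archive recorded. A minimal sketch of the equivalent manual steps, assuming the mirror URL and revision from the records above (the pipeline itself uses the httpRequest step, not curl):

# Fetch a pinned SPDK snapshot from the internal package mirror.
rev=c13c99a5eba3bff912124706e0ae1d70defef44d
curl -fO "http://10.211.164.20/packages/spdk_${rev}.tar.gz"
# Extract without preserving the archive's recorded owner/group.
tar --no-same-owner -xf "spdk_${rev}.tar.gz"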
00:00:57.560 [Pipeline] sh
00:00:57.842 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:59.231 [Pipeline] sh
00:00:59.513 + git -C dpdk log --oneline -n5
00:00:59.514 eeb0605f11 version: 23.11.0
00:00:59.514 238778122a doc: update release notes for 23.11
00:00:59.514 46aa6b3cfc doc: fix description of RSS features
00:00:59.514 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:00:59.514 7e421ae345 devtools: support skipping forbid rule check
00:00:59.531 [Pipeline] writeFile
00:00:59.546 [Pipeline] sh
00:00:59.829 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:59.841 [Pipeline] sh
00:01:00.124 + cat autorun-spdk.conf
00:01:00.124 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.124 SPDK_TEST_NVMF=1
00:01:00.124 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.124 SPDK_TEST_USDT=1
00:01:00.124 SPDK_RUN_UBSAN=1
00:01:00.124 SPDK_TEST_NVMF_MDNS=1
00:01:00.124 NET_TYPE=virt
00:01:00.124 SPDK_JSONRPC_GO_CLIENT=1
00:01:00.124 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:00.124 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:00.124 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:00.131 RUN_NIGHTLY=1
00:01:00.133 [Pipeline] }
00:01:00.146 [Pipeline] // stage
00:01:00.159 [Pipeline] stage
00:01:00.161 [Pipeline] { (Run VM)
00:01:00.173 [Pipeline] sh
00:01:00.454 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:00.454 + echo 'Start stage prepare_nvme.sh'
00:01:00.454 Start stage prepare_nvme.sh
00:01:00.454 + [[ -n 2 ]]
00:01:00.454 + disk_prefix=ex2
00:01:00.454 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:01:00.454 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:01:00.454 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:01:00.454 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.454 ++ SPDK_TEST_NVMF=1
00:01:00.454 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.454 ++ SPDK_TEST_USDT=1
00:01:00.454 ++ SPDK_RUN_UBSAN=1
00:01:00.454 ++ SPDK_TEST_NVMF_MDNS=1
00:01:00.454 ++ NET_TYPE=virt
00:01:00.454 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:00.454 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:00.454 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:00.454 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:00.454 ++ RUN_NIGHTLY=1
00:01:00.454 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:00.454 + nvme_files=()
00:01:00.454 + declare -A nvme_files
00:01:00.454 + backend_dir=/var/lib/libvirt/images/backends
00:01:00.454 + nvme_files['nvme.img']=5G
00:01:00.454 + nvme_files['nvme-cmb.img']=5G
00:01:00.454 + nvme_files['nvme-multi0.img']=4G
00:01:00.454 + nvme_files['nvme-multi1.img']=4G
00:01:00.455 + nvme_files['nvme-multi2.img']=4G
00:01:00.455 + nvme_files['nvme-openstack.img']=8G
00:01:00.455 + nvme_files['nvme-zns.img']=5G
00:01:00.455 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:00.455 + (( SPDK_TEST_FTL == 1 ))
00:01:00.455 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:00.455 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:00.455 + for nvme in "${!nvme_files[@]}"
00:01:00.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:00.455 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.455 + for nvme in "${!nvme_files[@]}"
00:01:00.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:00.455 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.455 + for nvme in "${!nvme_files[@]}"
00:01:00.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:00.455 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:00.455 + for nvme in "${!nvme_files[@]}"
00:01:00.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:00.455 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.455 + for nvme in "${!nvme_files[@]}"
00:01:00.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:00.455 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.455 + for nvme in "${!nvme_files[@]}"
00:01:00.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:00.714 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.714 + for nvme in "${!nvme_files[@]}"
00:01:00.714 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:00.714 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.714 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:00.714 + echo 'End stage prepare_nvme.sh'
00:01:00.714 End stage prepare_nvme.sh
00:01:00.724 [Pipeline] sh
00:01:01.001 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:01.002 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:01:01.002
00:01:01.002 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:01:01.002 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:01:01.002 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:01.002 HELP=0
00:01:01.002 DRY_RUN=0
00:01:01.002 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:01:01.002 NVME_DISKS_TYPE=nvme,nvme,
00:01:01.002 NVME_AUTO_CREATE=0
00:01:01.002 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:01:01.002 NVME_CMB=,,
00:01:01.002 NVME_PMR=,,
00:01:01.002 NVME_ZNS=,,
00:01:01.002 NVME_MS=,,
00:01:01.002 NVME_FDP=,,
00:01:01.002 SPDK_VAGRANT_DISTRO=fedora39
00:01:01.002 SPDK_VAGRANT_VMCPU=10
00:01:01.002 SPDK_VAGRANT_VMRAM=12288
00:01:01.002 SPDK_VAGRANT_PROVIDER=libvirt
00:01:01.002 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:01.002 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:01.002 SPDK_OPENSTACK_NETWORK=0
00:01:01.002 VAGRANT_PACKAGE_BOX=0
00:01:01.002 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:01.002 FORCE_DISTRO=true
00:01:01.002 VAGRANT_BOX_VERSION=
00:01:01.002 EXTRA_VAGRANTFILES=
00:01:01.002 NIC_MODEL=e1000
00:01:01.002
00:01:01.002 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt'
00:01:01.002 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:04.287 Bringing machine 'default' up with 'libvirt' provider...
00:01:04.546 ==> default: Creating image (snapshot of base box volume).
00:01:04.805 ==> default: Creating domain with the following settings...
00:01:04.805 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731774041_f794589f5b7d96a42d08
00:01:04.805 ==> default: -- Domain type: kvm
00:01:04.805 ==> default: -- Cpus: 10
00:01:04.805 ==> default: -- Feature: acpi
00:01:04.805 ==> default: -- Feature: apic
00:01:04.805 ==> default: -- Feature: pae
00:01:04.805 ==> default: -- Memory: 12288M
00:01:04.805 ==> default: -- Memory Backing: hugepages:
00:01:04.805 ==> default: -- Management MAC:
00:01:04.805 ==> default: -- Loader:
00:01:04.805 ==> default: -- Nvram:
00:01:04.805 ==> default: -- Base box: spdk/fedora39
00:01:04.805 ==> default: -- Storage pool: default
00:01:04.805 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731774041_f794589f5b7d96a42d08.img (20G)
00:01:04.805 ==> default: -- Volume Cache: default
00:01:04.805 ==> default: -- Kernel:
00:01:04.805 ==> default: -- Initrd:
00:01:04.805 ==> default: -- Graphics Type: vnc
00:01:04.805 ==> default: -- Graphics Port: -1
00:01:04.805 ==> default: -- Graphics IP: 127.0.0.1
00:01:04.805 ==> default: -- Graphics Password: Not defined
00:01:04.805 ==> default: -- Video Type: cirrus
00:01:04.805 ==> default: -- Video VRAM: 9216
00:01:04.805 ==> default: -- Sound Type:
00:01:04.805 ==> default: -- Keymap: en-us
00:01:04.805 ==> default: -- TPM Path:
00:01:04.805 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:04.805 ==> default: -- Command line args:
00:01:04.805 ==> default: -> value=-device,
00:01:04.805 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:04.805 ==> default: -> value=-drive,
00:01:04.805 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:01:04.805 ==> default: -> value=-device,
00:01:04.805 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.805 ==> default: -> value=-device,
00:01:04.805 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:04.805 ==> default: -> value=-drive,
00:01:04.805 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:04.805 ==> default: -> value=-device,
00:01:04.805 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.805 ==> default: -> value=-drive,
00:01:04.805 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:04.805 ==> default: -> value=-device,
00:01:04.805 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.805 ==> default: -> value=-drive,
00:01:04.805 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:04.805 ==> default: -> value=-device,
00:01:04.805 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.805 ==> default: Creating shared folders metadata...
00:01:04.805 ==> default: Starting domain.
00:01:06.743 ==> default: Waiting for domain to get an IP address...
00:01:24.841 ==> default: Waiting for SSH to become available...
00:01:24.841 ==> default: Configuring and enabling network interfaces...
00:01:27.382 default: SSH address: 192.168.121.92:22
00:01:27.382 default: SSH username: vagrant
00:01:27.382 default: SSH auth method: private key
00:01:29.918 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:38.039 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:43.308 ==> default: Mounting SSHFS shared folder...
00:01:45.214 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:45.214 ==> default: Checking Mount..
00:01:46.592 ==> default: Folder Successfully Mounted!
00:01:46.592 ==> default: Running provisioner: file...
00:01:47.160 default: ~/.gitconfig => .gitconfig
00:01:47.728
00:01:47.728 SUCCESS!
00:01:47.728
00:01:47.728 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:47.728 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:47.728 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:47.728
00:01:47.737 [Pipeline] }
00:01:47.756 [Pipeline] // stage
00:01:47.766 [Pipeline] dir
00:01:47.767 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt
00:01:47.769 [Pipeline] {
00:01:47.783 [Pipeline] catchError
00:01:47.785 [Pipeline] {
00:01:47.799 [Pipeline] sh
00:01:48.082 + vagrant ssh-config --host vagrant
00:01:48.083 + sed -ne /^Host/,$p
00:01:48.083 + tee ssh_conf
00:01:50.617 Host vagrant
00:01:50.617 HostName 192.168.121.92
00:01:50.617 User vagrant
00:01:50.617 Port 22
00:01:50.617 UserKnownHostsFile /dev/null
00:01:50.617 StrictHostKeyChecking no
00:01:50.617 PasswordAuthentication no
00:01:50.617 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:50.617 IdentitiesOnly yes
00:01:50.617 LogLevel FATAL
00:01:50.617 ForwardAgent yes
00:01:50.617 ForwardX11 yes
00:01:50.617
00:01:50.632 [Pipeline] withEnv
00:01:50.635 [Pipeline] {
00:01:50.651 [Pipeline] sh
00:01:50.933 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:50.933 source /etc/os-release
00:01:50.933 [[ -e /image.version ]] && img=$(< /image.version)
00:01:50.933 # Minimal, systemd-like check.
00:01:50.933 if [[ -e /.dockerenv ]]; then
00:01:50.933 # Clear garbage from the node's name:
00:01:50.933 # agt-er_autotest_547-896 -> autotest_547-896
00:01:50.933 # $HOSTNAME is the actual container id
00:01:50.933 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:50.933 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:50.933 # We can assume this is a mount from a host where container is running,
00:01:50.933 # so fetch its hostname to easily identify the target swarm worker.
00:01:50.933 container="$(< /etc/hostname) ($agent)"
00:01:50.933 else
00:01:50.933 # Fallback
00:01:50.933 container=$agent
00:01:50.933 fi
00:01:50.933 fi
00:01:50.933 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:50.933
00:01:51.204 [Pipeline] }
00:01:51.223 [Pipeline] // withEnv
00:01:51.233 [Pipeline] setCustomBuildProperty
00:01:51.250 [Pipeline] stage
00:01:51.252 [Pipeline] { (Tests)
00:01:51.270 [Pipeline] sh
00:01:51.555 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:51.828 [Pipeline] sh
00:01:52.110 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:52.384 [Pipeline] timeout
00:01:52.385 Timeout set to expire in 1 hr 0 min
00:01:52.387 [Pipeline] {
00:01:52.402 [Pipeline] sh
00:01:52.682 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:53.278 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:01:53.290 [Pipeline] sh
00:01:53.570 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:53.844 [Pipeline] sh
00:01:54.127 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:54.403 [Pipeline] sh
00:01:54.685 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:01:54.944 ++ readlink -f spdk_repo
00:01:54.944 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:54.944 + [[ -n /home/vagrant/spdk_repo ]]
00:01:54.944 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:54.944 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:54.944 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:54.944 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:54.944 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:54.944 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:01:54.944 + cd /home/vagrant/spdk_repo
00:01:54.944 + source /etc/os-release
00:01:54.944 ++ NAME='Fedora Linux'
00:01:54.944 ++ VERSION='39 (Cloud Edition)'
00:01:54.944 ++ ID=fedora
00:01:54.944 ++ VERSION_ID=39
00:01:54.944 ++ VERSION_CODENAME=
00:01:54.944 ++ PLATFORM_ID=platform:f39
00:01:54.944 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:54.944 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:54.944 ++ LOGO=fedora-logo-icon
00:01:54.944 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:54.944 ++ HOME_URL=https://fedoraproject.org/
00:01:54.944 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:54.944 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:54.944 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:54.944 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:54.944 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:54.944 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:54.944 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:54.944 ++ SUPPORT_END=2024-11-12
00:01:54.944 ++ VARIANT='Cloud Edition'
00:01:54.944 ++ VARIANT_ID=cloud
00:01:54.944 + uname -a
00:01:54.944 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:54.944 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:54.944 Hugepages
00:01:54.944 node hugesize free / total
00:01:54.944 node0 1048576kB 0 / 0
00:01:54.944 node0 2048kB 0 / 0
00:01:54.944
00:01:54.944 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:54.944 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:54.944 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:55.204 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:55.204 + rm -f /tmp/spdk-ld-path
00:01:55.204 + source autorun-spdk.conf
00:01:55.204 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.204 ++ SPDK_TEST_NVMF=1
00:01:55.204 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:55.204 ++ SPDK_TEST_USDT=1
00:01:55.204 ++ SPDK_RUN_UBSAN=1
00:01:55.204 ++ SPDK_TEST_NVMF_MDNS=1
00:01:55.204 ++ NET_TYPE=virt
00:01:55.204 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:55.204 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:55.204 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:55.204 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.204 ++ RUN_NIGHTLY=1
00:01:55.204 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:55.204 + [[ -n '' ]]
00:01:55.204 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:55.204 + for M in /var/spdk/build-*-manifest.txt
00:01:55.204 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:55.204 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.204 + for M in /var/spdk/build-*-manifest.txt
00:01:55.204 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:55.204 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.204 + for M in /var/spdk/build-*-manifest.txt
00:01:55.204 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:55.204 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.204 ++ uname
00:01:55.204 + [[ Linux == \L\i\n\u\x ]]
00:01:55.204 + sudo dmesg -T
00:01:55.204 + sudo dmesg --clear
00:01:55.204 + dmesg_pid=5965
00:01:55.204 + [[ Fedora Linux == FreeBSD ]]
00:01:55.204 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.204 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.204 + sudo dmesg -Tw
00:01:55.204 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:55.204 + [[ -x /usr/src/fio-static/fio ]]
00:01:55.204 + export FIO_BIN=/usr/src/fio-static/fio
00:01:55.204 + FIO_BIN=/usr/src/fio-static/fio
00:01:55.204 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:55.204 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:55.204 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:55.204 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.204 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.204 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:55.204 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.204 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.204 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.204 Test configuration:
00:01:55.204 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.204 SPDK_TEST_NVMF=1
00:01:55.204 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:55.204 SPDK_TEST_USDT=1
00:01:55.204 SPDK_RUN_UBSAN=1
00:01:55.204 SPDK_TEST_NVMF_MDNS=1
00:01:55.204 NET_TYPE=virt
00:01:55.204 SPDK_JSONRPC_GO_CLIENT=1
00:01:55.204 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:55.204 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:55.204 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.204 RUN_NIGHTLY=1
16:21:32 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:01:55.204 16:21:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
16:21:32 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
16:21:32 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:21:32 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:21:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:21:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:21:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:21:32 -- paths/export.sh@5 -- $ export PATH
16:21:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.204 16:21:32 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
16:21:32 -- common/autobuild_common.sh@440 -- $ date +%s
16:21:32 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731774092.XXXXXX
16:21:32 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731774092.9SNZm6
00:01:55.464 16:21:32 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
16:21:32 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']'
16:21:32 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
16:21:32 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
16:21:32 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
16:21:32 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
16:21:32 -- common/autobuild_common.sh@456 -- $ get_config_params
16:21:32 -- common/autotest_common.sh@397 -- $ xtrace_disable
16:21:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.465 16:21:32 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang'
16:21:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
16:21:32 -- spdk/autobuild.sh@12 -- $ umask 022
16:21:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
16:21:32 -- spdk/autobuild.sh@16 -- $ date -u
00:01:55.465 Sat Nov 16 04:21:32 PM UTC 2024
16:21:32 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:55.465 LTS-67-gc13c99a5e
16:21:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
16:21:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
16:21:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
16:21:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
16:21:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable
16:21:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.465 ************************************
00:01:55.465 START TEST ubsan
00:01:55.465 ************************************
00:01:55.465 using ubsan
16:21:32 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:55.465
00:01:55.465 real 0m0.000s
00:01:55.465 user 0m0.000s
00:01:55.465 sys 0m0.000s
16:21:32 -- common/autotest_common.sh@1115 -- $ xtrace_disable
16:21:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.465 ************************************
00:01:55.465 END TEST ubsan
00:01:55.465 ************************************
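The two NVMe controllers that setup.sh status listed above (nvme0 with one namespace, nvme1 with three) are exactly the devices assembled from the ex2-* raw backends during the Run VM stage. Condensed from the "==> default: -> value=..." records earlier, the underlying QEMU wiring looks roughly like this; a sketch only, since the real domain also carries the qcow2 boot disk, hugepage memory backing, and network options:

# Controller nvme-0 exposes ex2-nvme.img as a single 4K-block namespace (guest nvme0n1).
qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme,id=nvme-1,serial=12341 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
# ex2-nvme-multi1.img and ex2-nvme-multi2.img attach to bus=nvme-1 the same way
# with nsid=2 and nsid=3, which is why the guest enumerates nvme1n1..nvme1n3.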
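Earlier in the same trace, get_config_params turned the autorun-spdk.conf knobs into the config_params string recorded above: SPDK_RUN_UBSAN=1 surfaces as --enable-ubsan, SPDK_TEST_USDT as --with-usdt, SPDK_TEST_NVMF_MDNS as --with-avahi, SPDK_JSONRPC_GO_CLIENT as --with-golang, and SPDK_RUN_EXTERNAL_DPDK as --with-dpdk=/home/vagrant/spdk_repo/dpdk/build. A hedged condensation of that mapping, inferred from the correlation between the conf file and the resulting flags (the real helper handles many more knobs, and its defaults differ):

# Sketch of the conf-to-configure mapping; not the actual SPDK helper.
source autorun-spdk.conf
config_params='--enable-debug --enable-werror'
(( SPDK_RUN_UBSAN == 1 )) && config_params+=' --enable-ubsan'
(( SPDK_TEST_USDT == 1 )) && config_params+=' --with-usdt'
(( SPDK_TEST_NVMF_MDNS == 1 )) && config_params+=' --with-avahi'
(( SPDK_JSONRPC_GO_CLIENT == 1 )) && config_params+=' --with-golang'
[[ -n $SPDK_RUN_EXTERNAL_DPDK ]] && config_params+=" --with-dpdk=$SPDK_RUN_EXTERNAL_DPDK"
echo "$config_params"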
16:21:32 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
16:21:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
16:21:32 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
16:21:32 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
16:21:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable
16:21:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.465 ************************************
00:01:55.465 START TEST build_native_dpdk
00:01:55.465 ************************************
16:21:32 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk
16:21:32 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
16:21:32 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
16:21:32 -- common/autobuild_common.sh@50 -- $ local compiler_version
16:21:32 -- common/autobuild_common.sh@51 -- $ local compiler
16:21:32 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
16:21:32 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
16:21:32 -- common/autobuild_common.sh@55 -- $ compiler=gcc
16:21:32 -- common/autobuild_common.sh@61 -- $ export CC=gcc
16:21:32 -- common/autobuild_common.sh@61 -- $ CC=gcc
16:21:32 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
16:21:32 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
16:21:32 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
16:21:32 -- common/autobuild_common.sh@68 -- $ compiler_version=13
16:21:32 -- common/autobuild_common.sh@69 -- $ compiler_version=13
16:21:32 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
16:21:32 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
16:21:32 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
16:21:32 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
16:21:32 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
16:21:32 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:01:55.465 eeb0605f11 version: 23.11.0
00:01:55.465 238778122a doc: update release notes for 23.11
00:01:55.465 46aa6b3cfc doc: fix description of RSS features
00:01:55.465 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:55.465 7e421ae345 devtools: support skipping forbid rule check
16:21:32 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
16:21:32 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
16:21:32 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
16:21:32 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
16:21:32 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
16:21:32 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
16:21:32 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
16:21:32 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
16:21:32 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
16:21:32 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
16:21:32 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
16:21:32 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
16:21:32 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
16:21:32 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
16:21:32 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
16:21:32 -- common/autobuild_common.sh@168 -- $ uname -s
16:21:32 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
16:21:32 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
16:21:32 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
16:21:32 -- scripts/common.sh@332 -- $ local ver1 ver1_l
16:21:32 -- scripts/common.sh@333 -- $ local ver2 ver2_l
16:21:32 -- scripts/common.sh@335 -- $ IFS=.-:
16:21:32 -- scripts/common.sh@335 -- $ read -ra ver1
16:21:32 -- scripts/common.sh@336 -- $ IFS=.-:
16:21:32 -- scripts/common.sh@336 -- $ read -ra ver2
16:21:32 -- scripts/common.sh@337 -- $ local 'op=<'
16:21:32 -- scripts/common.sh@339 -- $ ver1_l=3
16:21:32 -- scripts/common.sh@340 -- $ ver2_l=3
16:21:32 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
16:21:32 -- scripts/common.sh@343 -- $ case "$op" in
16:21:32 -- scripts/common.sh@344 -- $ : 1
16:21:32 -- scripts/common.sh@363 -- $ (( v = 0 ))
16:21:32 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
16:21:32 -- scripts/common.sh@364 -- $ decimal 23
16:21:32 -- scripts/common.sh@352 -- $ local d=23
16:21:32 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
16:21:32 -- scripts/common.sh@354 -- $ echo 23
16:21:32 -- scripts/common.sh@364 -- $ ver1[v]=23
16:21:32 -- scripts/common.sh@365 -- $ decimal 21
16:21:32 -- scripts/common.sh@352 -- $ local d=21
16:21:32 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
16:21:32 -- scripts/common.sh@354 -- $ echo 21
16:21:32 -- scripts/common.sh@365 -- $ ver2[v]=21
16:21:32 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
16:21:32 -- scripts/common.sh@366 -- $ return 1
16:21:32 -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:55.465 patching file config/rte_config.h
00:01:55.465 Hunk #1 succeeded at 60 (offset 1 line).
16:21:32 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
16:21:32 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0
16:21:32 -- scripts/common.sh@332 -- $ local ver1 ver1_l
16:21:32 -- scripts/common.sh@333 -- $ local ver2 ver2_l
16:21:32 -- scripts/common.sh@335 -- $ IFS=.-:
16:21:32 -- scripts/common.sh@335 -- $ read -ra ver1
16:21:32 -- scripts/common.sh@336 -- $ IFS=.-:
16:21:32 -- scripts/common.sh@336 -- $ read -ra ver2
16:21:32 -- scripts/common.sh@337 -- $ local 'op=<'
16:21:32 -- scripts/common.sh@339 -- $ ver1_l=3
16:21:32 -- scripts/common.sh@340 -- $ ver2_l=3
16:21:32 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
16:21:32 -- scripts/common.sh@343 -- $ case "$op" in
16:21:32 -- scripts/common.sh@344 -- $ : 1
16:21:32 -- scripts/common.sh@363 -- $ (( v = 0 ))
16:21:32 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
16:21:32 -- scripts/common.sh@364 -- $ decimal 23
16:21:32 -- scripts/common.sh@352 -- $ local d=23
16:21:32 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
16:21:32 -- scripts/common.sh@354 -- $ echo 23
16:21:32 -- scripts/common.sh@364 -- $ ver1[v]=23
16:21:32 -- scripts/common.sh@365 -- $ decimal 24
16:21:32 -- scripts/common.sh@352 -- $ local d=24
16:21:32 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
16:21:32 -- scripts/common.sh@354 -- $ echo 24
16:21:32 -- scripts/common.sh@365 -- $ ver2[v]=24
16:21:32 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
16:21:32 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
16:21:32 -- scripts/common.sh@367 -- $ return 0
16:21:32 -- common/autobuild_common.sh@177 -- $ patch -p1
00:01:55.466 patching file lib/pcapng/rte_pcapng.c
16:21:32 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
16:21:32 -- common/autobuild_common.sh@181 -- $ uname -s
16:21:32 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
16:21:32 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
16:21:32 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:00.739 The Meson build system
00:02:00.739 Version: 1.5.0
00:02:00.739 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:00.739 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:00.739 Build type: native build
00:02:00.739 Program cat found: YES (/usr/bin/cat)
00:02:00.739 Project name: DPDK
00:02:00.739 Project version: 23.11.0
00:02:00.739 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:00.739 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:00.739 Host machine cpu family: x86_64
00:02:00.739 Host machine cpu: x86_64
00:02:00.739 Message: ## Building in Developer Mode ##
00:02:00.739 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:00.739 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:00.739 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:00.739 Program python3 found: YES (/usr/bin/python3)
00:02:00.739 Program cat found: YES (/usr/bin/cat)
00:02:00.739 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
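The cmp_versions traces above are scripts/common.sh gating the two DPDK compatibility patches: lt 23.11.0 21.11.0 returns 1 (23.11.0 is not older than 21.11.0), so the rte_config.h hunk for newer DPDK is applied, while lt 23.11.0 24.07.0 returns 0, so the lib/pcapng/rte_pcapng.c fix is applied as well. The helper compares the version strings field by field; a condensed sketch of the same idea, simplified from the trace (the real function also splits on '-' and ':' and supports more comparison operators):

# Compare dotted versions field by field; succeed (return 0) when $1 < $2.
version_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # earlier field decides
    (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}
version_lt 23.11.0 21.11.0 || echo "not older than 21.11.0: use the new rte_config.h hunk"
version_lt 23.11.0 24.07.0 && echo "older than 24.07.0: apply the rte_pcapng.c fix"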
00:02:00.739 Compiler for C supports arguments -march=native: YES
00:02:00.739 Checking for size of "void *" : 8
00:02:00.739 Checking for size of "void *" : 8 (cached)
00:02:00.739 Library m found: YES
00:02:00.739 Library numa found: YES
00:02:00.739 Has header "numaif.h" : YES
00:02:00.739 Library fdt found: NO
00:02:00.739 Library execinfo found: NO
00:02:00.739 Has header "execinfo.h" : YES
00:02:00.739 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:00.739 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:00.739 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:00.739 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:00.739 Run-time dependency openssl found: YES 3.1.1
00:02:00.739 Run-time dependency libpcap found: YES 1.10.4
00:02:00.739 Has header "pcap.h" with dependency libpcap: YES
00:02:00.739 Compiler for C supports arguments -Wcast-qual: YES
00:02:00.739 Compiler for C supports arguments -Wdeprecated: YES
00:02:00.739 Compiler for C supports arguments -Wformat: YES
00:02:00.739 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:00.739 Compiler for C supports arguments -Wformat-security: NO
00:02:00.739 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:00.739 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:00.739 Compiler for C supports arguments -Wnested-externs: YES
00:02:00.739 Compiler for C supports arguments -Wold-style-definition: YES
00:02:00.739 Compiler for C supports arguments -Wpointer-arith: YES
00:02:00.739 Compiler for C supports arguments -Wsign-compare: YES
00:02:00.739 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:00.739 Compiler for C supports arguments -Wundef: YES
00:02:00.739 Compiler for C supports arguments -Wwrite-strings: YES
00:02:00.739 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:00.739 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:00.739 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:00.739 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:00.739 Program objdump found: YES (/usr/bin/objdump)
00:02:00.739 Compiler for C supports arguments -mavx512f: YES
00:02:00.739 Checking if "AVX512 checking" compiles: YES
00:02:00.739 Fetching value of define "__SSE4_2__" : 1
00:02:00.739 Fetching value of define "__AES__" : 1
00:02:00.739 Fetching value of define "__AVX__" : 1
00:02:00.739 Fetching value of define "__AVX2__" : 1
00:02:00.739 Fetching value of define "__AVX512BW__" : (undefined)
00:02:00.739 Fetching value of define "__AVX512CD__" : (undefined)
00:02:00.739 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:00.739 Fetching value of define "__AVX512F__" : (undefined)
00:02:00.739 Fetching value of define "__AVX512VL__" : (undefined)
00:02:00.739 Fetching value of define "__PCLMUL__" : 1
00:02:00.739 Fetching value of define "__RDRND__" : 1
00:02:00.739 Fetching value of define "__RDSEED__" : 1
00:02:00.739 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:00.739 Fetching value of define "__znver1__" : (undefined)
00:02:00.739 Fetching value of define "__znver2__" : (undefined)
00:02:00.739 Fetching value of define "__znver3__" : (undefined)
00:02:00.739 Fetching value of define "__znver4__" : (undefined)
00:02:00.739 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:00.739 Message: lib/log: Defining dependency "log"
00:02:00.739 Message: lib/kvargs: Defining dependency "kvargs"
00:02:00.739 Message: lib/telemetry: Defining dependency "telemetry"
00:02:00.739 Checking for function "getentropy" : NO
00:02:00.739 Message: lib/eal: Defining dependency "eal"
00:02:00.739 Message: lib/ring: Defining dependency "ring"
00:02:00.739 Message: lib/rcu: Defining dependency "rcu"
00:02:00.739 Message: lib/mempool: Defining dependency "mempool"
00:02:00.739 Message: lib/mbuf: Defining dependency "mbuf"
00:02:00.739 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:00.739 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:00.739 Compiler for C supports arguments -mpclmul: YES
00:02:00.739 Compiler for C supports arguments -maes: YES
00:02:00.739 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:00.739 Compiler for C supports arguments -mavx512bw: YES
00:02:00.739 Compiler for C supports arguments -mavx512dq: YES
00:02:00.739 Compiler for C supports arguments -mavx512vl: YES
00:02:00.739 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:00.739 Compiler for C supports arguments -mavx2: YES
00:02:00.739 Compiler for C supports arguments -mavx: YES
00:02:00.739 Message: lib/net: Defining dependency "net"
00:02:00.739 Message: lib/meter: Defining dependency "meter"
00:02:00.739 Message: lib/ethdev: Defining dependency "ethdev"
00:02:00.739 Message: lib/pci: Defining dependency "pci"
00:02:00.739 Message: lib/cmdline: Defining dependency "cmdline"
00:02:00.739 Message: lib/metrics: Defining dependency "metrics"
00:02:00.739 Message: lib/hash: Defining dependency "hash"
00:02:00.739 Message: lib/timer: Defining dependency "timer"
00:02:00.739 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:00.739 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:02:00.739 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:02:00.739 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:02:00.739 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:02:00.739 Message: lib/acl: Defining dependency "acl"
00:02:00.739 Message: lib/bbdev: Defining dependency "bbdev"
00:02:00.739 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:00.739 Run-time dependency libelf found: YES 0.191
00:02:00.739 Message: lib/bpf: Defining dependency "bpf"
00:02:00.739 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:00.739 Message: lib/compressdev: Defining dependency "compressdev"
00:02:00.739 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:00.739 Message: lib/distributor: Defining dependency "distributor"
00:02:00.740 Message: lib/dmadev: Defining dependency "dmadev"
00:02:00.740 Message: lib/efd: Defining dependency "efd"
00:02:00.740 Message: lib/eventdev: Defining dependency "eventdev"
00:02:00.740 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:00.740 Message: lib/gpudev: Defining dependency "gpudev"
00:02:00.740 Message: lib/gro: Defining dependency "gro"
00:02:00.740 Message: lib/gso: Defining dependency "gso"
00:02:00.740 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:00.740 Message: lib/jobstats: Defining dependency "jobstats"
00:02:00.740 Message: lib/latencystats: Defining dependency "latencystats"
00:02:00.740 Message: lib/lpm: Defining dependency "lpm"
00:02:00.740 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:00.740 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:02:00.740 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:00.740 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:00.740 Message: lib/member: Defining dependency "member"
00:02:00.740 Message: lib/pcapng: Defining dependency "pcapng"
00:02:00.740 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:00.740 Message: lib/power: Defining dependency "power"
00:02:00.740 Message: lib/rawdev: Defining dependency "rawdev"
00:02:00.740 Message: lib/regexdev: Defining dependency "regexdev"
00:02:00.740 Message: lib/mldev: Defining dependency "mldev"
00:02:00.740 Message: lib/rib: Defining dependency "rib"
00:02:00.740 Message: lib/reorder: Defining dependency "reorder"
00:02:00.740 Message: lib/sched: Defining dependency "sched"
00:02:00.740 Message: lib/security: Defining dependency "security"
00:02:00.740 Message: lib/stack: Defining dependency "stack"
00:02:00.740 Has header "linux/userfaultfd.h" : YES
00:02:00.740 Has header "linux/vduse.h" : YES
00:02:00.740 Message: lib/vhost: Defining dependency "vhost"
00:02:00.740 Message: lib/ipsec: Defining dependency "ipsec"
00:02:00.740 Message: lib/pdcp: Defining dependency "pdcp"
00:02:00.740 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:00.740 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:02:00.740 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:02:00.740 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:00.740 Message: lib/fib: Defining dependency "fib"
00:02:00.740 Message: lib/port: Defining dependency "port"
00:02:00.740 Message: lib/pdump: Defining dependency "pdump"
00:02:00.740 Message: lib/table: Defining dependency "table"
00:02:00.740 Message: lib/pipeline: Defining dependency "pipeline"
00:02:00.740 Message: lib/graph: Defining dependency "graph"
00:02:00.740 Message: lib/node: Defining dependency "node"
00:02:00.740 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:02.642 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:02.642 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:02.642 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:02.642 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:02.642 Compiler for C supports arguments -Wno-unused-value: YES
00:02:02.642 Compiler for C supports arguments -Wno-format: YES
00:02:02.642 Compiler for C supports arguments -Wno-format-security: YES
00:02:02.642 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:02.642 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:02.642 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:02.642 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:02.642 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:02.642 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:02.643 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:02.643 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:02.643 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:02.643 Has header "sys/epoll.h" : YES
00:02:02.643 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:02.643 Configuring doxy-api-html.conf using configuration
00:02:02.643 Configuring doxy-api-man.conf using configuration
00:02:02.643 Program mandb found: YES (/usr/bin/mandb)
00:02:02.643 Program sphinx-build found: NO
00:02:02.643 Configuring rte_build_config.h using configuration
00:02:02.643 Message:
00:02:02.643 =================
00:02:02.643 Applications Enabled
00:02:02.643 =================
00:02:02.643
00:02:02.643 apps:
00:02:02.643 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:02.643 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:02.643 test-pmd, test-regex, test-sad, test-security-perf,
00:02:02.643
00:02:02.643 Message:
00:02:02.643 =================
00:02:02.643 Libraries Enabled
00:02:02.643 =================
00:02:02.643
00:02:02.643 libs:
00:02:02.643 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:02.643 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:02.643 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:02.643 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:02.643 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:02.643 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:02.643 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:02.643
00:02:02.643
00:02:02.643 Message:
00:02:02.643 ===============
00:02:02.643 Drivers Enabled
00:02:02.643 ===============
00:02:02.643
00:02:02.643 common:
00:02:02.643
00:02:02.643 bus:
00:02:02.643 pci, vdev,
00:02:02.643 mempool:
00:02:02.643 ring,
00:02:02.643 dma:
00:02:02.643
00:02:02.643 net:
00:02:02.643 i40e,
00:02:02.643 raw:
00:02:02.643
00:02:02.643 crypto:
00:02:02.643
00:02:02.643 compress:
00:02:02.643
00:02:02.643 regex:
00:02:02.643
00:02:02.643 ml:
00:02:02.643
00:02:02.643 vdpa:
00:02:02.643
00:02:02.643 event:
00:02:02.643
00:02:02.643 baseband:
00:02:02.643
00:02:02.643 gpu:
00:02:02.643
00:02:02.643
00:02:02.643 Message:
00:02:02.643 =================
00:02:02.643 Content Skipped
00:02:02.643 =================
00:02:02.643
00:02:02.643 apps:
00:02:02.643
00:02:02.643 libs:
00:02:02.643
00:02:02.643 drivers:
00:02:02.643 common/cpt: not in enabled drivers build config
00:02:02.643 common/dpaax: not in enabled drivers build config
00:02:02.643 common/iavf: not in enabled drivers build config
00:02:02.643 common/idpf: not in enabled drivers build config
00:02:02.643 common/mvep: not in enabled drivers build config
00:02:02.643 common/octeontx: not in enabled drivers build config
00:02:02.643 bus/auxiliary: not in enabled drivers build config
00:02:02.643 bus/cdx: not in enabled drivers build config
00:02:02.643 bus/dpaa: not in enabled drivers build config
00:02:02.643 bus/fslmc: not in enabled drivers build config
00:02:02.643 bus/ifpga: not in enabled drivers build config
00:02:02.643 bus/platform: not in enabled drivers build config
00:02:02.643 bus/vmbus: not in enabled drivers build config
00:02:02.643 common/cnxk: not in enabled drivers build config
00:02:02.643 common/mlx5: not in enabled drivers build config
00:02:02.643 common/nfp: not in enabled drivers build config
00:02:02.643 common/qat: not in enabled drivers build config
00:02:02.643 common/sfc_efx: not in enabled drivers build config
00:02:02.643 mempool/bucket: not in enabled drivers build config
00:02:02.643 mempool/cnxk: not in enabled drivers build config
00:02:02.643 mempool/dpaa: not in enabled drivers build config
00:02:02.643 mempool/dpaa2: not in enabled drivers build config
00:02:02.643 mempool/octeontx: not in enabled drivers build config
00:02:02.643 mempool/stack: not in enabled drivers build config
00:02:02.643 dma/cnxk: not in enabled drivers build config
00:02:02.643 dma/dpaa: not in enabled drivers build config
00:02:02.643 dma/dpaa2: not in enabled drivers build config
00:02:02.643 dma/hisilicon: not in enabled drivers build config
00:02:02.643 dma/idxd: not in enabled drivers build config
00:02:02.643 dma/ioat: not in enabled drivers build config
00:02:02.643 dma/skeleton: not in enabled drivers build config
00:02:02.643 net/af_packet: not in enabled drivers build config
00:02:02.643 net/af_xdp: not in enabled drivers build config
00:02:02.643 net/ark: not in enabled drivers build config
00:02:02.643 net/atlantic: not in enabled drivers build config
00:02:02.643 net/avp: not in enabled drivers build config
00:02:02.643 net/axgbe: not in enabled drivers build config
00:02:02.643 net/bnx2x: not in enabled drivers build config
00:02:02.643 net/bnxt: not in enabled drivers build config
00:02:02.643 net/bonding: not in enabled drivers build config
00:02:02.643 net/cnxk: not in enabled drivers build config
00:02:02.643 net/cpfl: not in enabled drivers build config
00:02:02.643 net/cxgbe: not in enabled drivers build config
00:02:02.643 net/dpaa: not in enabled drivers build config
00:02:02.643 net/dpaa2: not in enabled drivers build config
00:02:02.643 net/e1000: not in enabled drivers build config
00:02:02.643 net/ena: not in enabled drivers build config
00:02:02.643 net/enetc: not in enabled drivers build config
00:02:02.643 net/enetfec: not in enabled drivers build config
00:02:02.643 net/enic: not in enabled drivers build config
00:02:02.643 net/failsafe: not in enabled drivers build config
00:02:02.643 net/fm10k: not in enabled drivers build config
00:02:02.643 net/gve: not in enabled drivers build config
00:02:02.643 net/hinic: not in enabled drivers build config
00:02:02.643 net/hns3: not in enabled drivers build config
00:02:02.643 net/iavf: not in enabled drivers build config
00:02:02.643 net/ice: not in enabled drivers build config
00:02:02.643 net/idpf: not in enabled drivers build config
00:02:02.643 net/igc: not in enabled drivers build config
00:02:02.643 net/ionic: not in enabled drivers build config
00:02:02.643 net/ipn3ke: not in enabled drivers build config
00:02:02.643 net/ixgbe: not in enabled drivers build config
00:02:02.643 net/mana: not in enabled drivers build config
00:02:02.643 net/memif: not in enabled drivers build config
00:02:02.643 net/mlx4: not in enabled drivers build config
00:02:02.643 net/mlx5: not in enabled drivers build config
00:02:02.643 net/mvneta: not in enabled drivers build config
00:02:02.643 net/mvpp2: not in enabled drivers build config
00:02:02.643 net/netvsc: not in enabled drivers build config
00:02:02.643 net/nfb: not in enabled drivers build config
00:02:02.643 net/nfp: not in enabled drivers build config
00:02:02.643 net/ngbe: not in enabled drivers build config
00:02:02.643 net/null: not in enabled drivers build config
00:02:02.643 net/octeontx: not in enabled drivers build config
00:02:02.643 net/octeon_ep: not in enabled drivers build config
00:02:02.643 net/pcap: not in enabled drivers build config
00:02:02.643 net/pfe: not in enabled drivers build config
00:02:02.643 net/qede: not in enabled drivers build config
00:02:02.643 net/ring: not in enabled drivers build config
00:02:02.643 net/sfc: not in enabled drivers build config
00:02:02.643 net/softnic: not in enabled drivers build config
00:02:02.643 net/tap: not in enabled drivers build config
00:02:02.643 net/thunderx: not in enabled drivers build config
00:02:02.643 net/txgbe: not in enabled drivers build config
00:02:02.643 net/vdev_netvsc: not in enabled drivers build config
00:02:02.643 net/vhost: not in enabled drivers build config
00:02:02.643 net/virtio: not in enabled drivers build config
00:02:02.643 net/vmxnet3: not in enabled drivers build config
00:02:02.643 raw/cnxk_bphy: not in enabled drivers build config
00:02:02.643 raw/cnxk_gpio: not in enabled drivers build config
00:02:02.643 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:02.643 raw/ifpga: not in enabled drivers build config
00:02:02.643 raw/ntb: not in enabled drivers build config
00:02:02.643 raw/skeleton: not in enabled drivers build config
00:02:02.643 crypto/armv8: not in enabled drivers build config
00:02:02.643 crypto/bcmfs: not in enabled drivers build config
00:02:02.643 crypto/caam_jr: not in enabled drivers build config
00:02:02.643 crypto/ccp: not in enabled drivers build config
00:02:02.643 crypto/cnxk: not in enabled drivers build config
00:02:02.643 crypto/dpaa_sec: not in enabled drivers build config
00:02:02.643 crypto/dpaa2_sec: not in enabled drivers build config
00:02:02.643 crypto/ipsec_mb: not in enabled drivers build config
00:02:02.643 crypto/mlx5: not in enabled drivers build config
00:02:02.643 crypto/mvsam: not in enabled drivers build config
00:02:02.643 crypto/nitrox: not in enabled drivers build config
00:02:02.643 crypto/null: not in enabled drivers build config
00:02:02.643 crypto/octeontx: not in enabled drivers build config
00:02:02.643 crypto/openssl: not in enabled drivers build config
00:02:02.643 crypto/scheduler: not in enabled drivers build config
00:02:02.643 crypto/uadk: not in enabled drivers build config
00:02:02.643 crypto/virtio: not in enabled drivers build config
00:02:02.643 compress/isal: not in enabled drivers build config
00:02:02.643 compress/mlx5: not in enabled drivers build config
00:02:02.643 compress/octeontx: not in enabled drivers build config
00:02:02.643 compress/zlib: not in enabled drivers build config
00:02:02.643 regex/mlx5: not in enabled drivers build config
00:02:02.643 regex/cn9k: not in enabled drivers build config
00:02:02.643 ml/cnxk: not in enabled drivers build config
00:02:02.643 vdpa/ifc: not in enabled drivers build config
00:02:02.643 vdpa/mlx5: not in enabled drivers build config
00:02:02.643 vdpa/nfp: not in enabled drivers build config
00:02:02.643 vdpa/sfc: not in enabled drivers build config
00:02:02.643 event/cnxk: not in enabled drivers build config
00:02:02.644 event/dlb2: not in enabled drivers build config
00:02:02.644 event/dpaa: not in enabled drivers build config
00:02:02.644 event/dpaa2: not in enabled drivers build config
00:02:02.644 event/dsw: not in enabled drivers build config
00:02:02.644 event/opdl: not in enabled drivers build config
00:02:02.644 event/skeleton: not in enabled drivers build config
00:02:02.644 event/sw: not in enabled drivers build config
00:02:02.644 event/octeontx: not in enabled drivers build config
00:02:02.644 baseband/acc: not in enabled drivers build config
00:02:02.644 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:02.644 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:02.644 baseband/la12xx: not in enabled drivers build config
00:02:02.644 baseband/null: not in enabled drivers build config
00:02:02.644 baseband/turbo_sw: not in enabled drivers build config
00:02:02.644 gpu/cuda: not in enabled drivers build config
00:02:02.644
00:02:02.644
00:02:02.644 Build targets in project: 220
00:02:02.644
00:02:02.644 DPDK 23.11.0
00:02:02.644
00:02:02.644 User defined options
00:02:02.644 libdir : lib
00:02:02.644 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:02.644 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:02.644 c_link_args :
00:02:02.644 enable_docs : false
00:02:02.644 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:02.644 enable_kmods : false
00:02:02.644 machine : native
00:02:02.644 tests : false
00:02:02.644
00:02:02.644 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:02.644 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:02.644 16:21:39 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:02.644 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:02.644 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:02.644 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:02.644 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:02.644 [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:02.644 [5/710] Linking static target lib/librte_kvargs.a
00:02:02.902 [6/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:02.902 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:02.902 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:02.902 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:02.902 [10/710] Linking static target lib/librte_log.a
00:02:03.161 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.161 [12/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.161 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:03.161 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:03.161 [15/710] Linking target lib/librte_log.so.24.0
00:02:03.420 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:03.420 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:03.420 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:03.420 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:03.420 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:03.679 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:03.679 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:03.679 [23/710] Linking target lib/librte_kvargs.so.24.0
00:02:03.679 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:03.679 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:03.937 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:03.937 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:03.937 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:03.937 [29/710] Linking static target lib/librte_telemetry.a
00:02:03.937 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:03.937 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:04.196 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:04.196 [33/710] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:04.196 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:04.454 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:04.454 [36/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.454 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.454 [38/710] Linking target lib/librte_telemetry.so.24.0 00:02:04.454 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:04.454 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.454 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.454 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:04.454 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.454 [44/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:04.713 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:04.713 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:04.713 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:04.972 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:04.972 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:04.972 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:04.972 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.972 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:04.972 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.231 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.231 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.231 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.231 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.490 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.490 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.490 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.490 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.490 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.490 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.490 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.749 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.749 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.749 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.749 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:06.007 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:06.007 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:06.007 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:06.007 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:02:06.007 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:06.007 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:06.007 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:06.007 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:06.007 [77/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:06.266 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:06.266 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.524 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.524 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:06.524 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.783 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:06.783 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.783 [85/710] Linking static target lib/librte_ring.a 00:02:06.783 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.783 [87/710] Linking static target lib/librte_eal.a 00:02:06.783 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.783 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.042 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.042 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:07.042 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.300 [93/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.300 [94/710] Linking static target lib/librte_mempool.a 00:02:07.300 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.300 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:07.300 [97/710] Linking static target lib/librte_rcu.a 00:02:07.558 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.558 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.558 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.559 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.559 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.817 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.817 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.817 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.817 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:08.075 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:08.075 [108/710] Linking static target lib/librte_mbuf.a 00:02:08.075 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:08.075 [110/710] Linking static target lib/librte_net.a 00:02:08.075 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:08.075 [112/710] Linking static target lib/librte_meter.a 00:02:08.333 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.333 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.333 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.333 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.333 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.333 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.592 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.160 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:09.160 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.420 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.420 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:09.420 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.420 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.420 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.420 [127/710] Linking static target lib/librte_pci.a 00:02:09.420 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.420 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.679 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.679 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.679 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.679 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:09.679 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.679 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.679 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.679 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.679 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.679 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.938 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.938 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:10.197 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.197 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:10.197 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.197 [145/710] Linking static target lib/librte_cmdline.a 00:02:10.455 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.455 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:10.455 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:10.455 [149/710] Linking static target lib/librte_metrics.a 00:02:10.455 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.715 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.973 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.973 [153/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.973 [154/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.973 [155/710] Linking static target lib/librte_timer.a 00:02:11.540 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.540 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:11.540 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:11.799 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:11.799 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:12.058 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:12.058 [162/710] Linking static target lib/librte_ethdev.a 00:02:12.317 [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:12.317 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:12.317 [165/710] Linking static target lib/librte_bitratestats.a 00:02:12.317 [166/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.576 [167/710] Linking target lib/librte_eal.so.24.0 00:02:12.576 [168/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:12.576 [169/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:12.576 [170/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.576 [171/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:12.576 [172/710] Linking static target lib/librte_hash.a 00:02:12.576 [173/710] Linking static target lib/librte_bbdev.a 00:02:12.576 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:12.576 [175/710] Linking target lib/librte_ring.so.24.0 00:02:12.835 [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:12.835 [177/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:12.835 [178/710] Linking target lib/librte_rcu.so.24.0 00:02:12.835 [179/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:12.835 [180/710] Linking target lib/librte_mempool.so.24.0 00:02:12.835 [181/710] Linking target lib/librte_meter.so.24.0 00:02:12.835 [182/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:13.094 [183/710] Linking target lib/librte_pci.so.24.0 00:02:13.094 [184/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:13.094 [185/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:13.094 [186/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:13.094 [187/710] Linking static target lib/acl/libavx2_tmp.a 00:02:13.094 [188/710] Linking target lib/librte_timer.so.24.0 00:02:13.094 [189/710] Linking target lib/librte_mbuf.so.24.0 00:02:13.094 [190/710] Linking static target lib/acl/libavx512_tmp.a 00:02:13.094 [191/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.094 [192/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:13.094 [193/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:13.094 [194/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.094 [195/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:13.094 [196/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:13.094 [197/710] Linking target 
lib/librte_net.so.24.0 00:02:13.094 [198/710] Linking target lib/librte_bbdev.so.24.0 00:02:13.353 [199/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:13.353 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:13.353 [201/710] Linking target lib/librte_cmdline.so.24.0 00:02:13.353 [202/710] Linking target lib/librte_hash.so.24.0 00:02:13.353 [203/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:13.353 [204/710] Linking static target lib/librte_acl.a 00:02:13.612 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:13.612 [206/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:13.612 [207/710] Linking static target lib/librte_cfgfile.a 00:02:13.612 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:13.870 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.870 [210/710] Linking target lib/librte_acl.so.24.0 00:02:13.870 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:13.870 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.870 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:13.870 [214/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:13.870 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:13.870 [216/710] Linking target lib/librte_cfgfile.so.24.0 00:02:14.129 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:14.129 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:14.388 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.388 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:14.388 [221/710] Linking static target lib/librte_bpf.a 00:02:14.647 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.647 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.647 [224/710] Linking static target lib/librte_compressdev.a 00:02:14.647 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.647 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.905 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:14.905 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:14.905 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:14.905 [230/710] Linking static target lib/librte_distributor.a 00:02:15.164 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.164 [232/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:15.164 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:15.164 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.164 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:15.164 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.164 [237/710] Linking static target lib/librte_dmadev.a 00:02:15.424 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:15.424 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.696 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:15.696 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:15.696 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:15.973 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:16.232 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:16.232 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:16.232 [246/710] Linking static target lib/librte_efd.a 00:02:16.232 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:16.232 [248/710] Linking static target lib/librte_cryptodev.a 00:02:16.232 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:16.491 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.491 [251/710] Linking target lib/librte_efd.so.24.0 00:02:16.491 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.491 [253/710] Linking target lib/librte_ethdev.so.24.0 00:02:16.750 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:16.750 [255/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:16.750 [256/710] Linking target lib/librte_metrics.so.24.0 00:02:16.750 [257/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:16.750 [258/710] Linking target lib/librte_bpf.so.24.0 00:02:16.750 [259/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:16.750 [260/710] Linking target lib/librte_bitratestats.so.24.0 00:02:17.008 [261/710] Linking static target lib/librte_dispatcher.a 00:02:17.008 [262/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:17.008 [263/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:17.008 [264/710] Linking static target lib/librte_gpudev.a 00:02:17.008 [265/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:17.266 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:17.266 [267/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:17.266 [268/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.266 [269/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:17.266 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.525 [271/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:17.525 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:02:17.525 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:17.783 [274/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:17.783 [275/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.783 [276/710] Linking target lib/librte_gpudev.so.24.0 00:02:17.783 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:17.783 [278/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:17.783 [279/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:17.783 [280/710] Linking static target lib/librte_eventdev.a 00:02:18.042 [281/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:18.042 [282/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:18.042 [283/710] Linking static target lib/librte_gro.a 00:02:18.042 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:18.042 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:18.042 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:18.300 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.300 [288/710] Linking target lib/librte_gro.so.24.0 00:02:18.300 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:18.300 [290/710] Linking static target lib/librte_gso.a 00:02:18.558 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.558 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:18.558 [293/710] Linking target lib/librte_gso.so.24.0 00:02:18.558 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:18.558 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:18.558 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:18.816 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:18.816 [298/710] Linking static target lib/librte_jobstats.a 00:02:18.816 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:19.074 [300/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:19.074 [301/710] Linking static target lib/librte_latencystats.a 00:02:19.074 [302/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.074 [303/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:19.074 [304/710] Linking static target lib/librte_ip_frag.a 00:02:19.074 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:19.074 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.074 [307/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:19.074 [308/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:19.333 [309/710] Linking target lib/librte_latencystats.so.24.0 00:02:19.333 [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:19.333 [311/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.333 [312/710] Linking target lib/librte_ip_frag.so.24.0 00:02:19.333 [313/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.333 [314/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:19.333 [315/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:19.333 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.592 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.850 [318/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:19.850 [319/710] Linking static target lib/librte_lpm.a 
00:02:19.850 [320/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.850 [321/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:19.850 [322/710] Linking target lib/librte_eventdev.so.24.0 00:02:19.850 [323/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:19.850 [324/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:20.109 [325/710] Linking target lib/librte_dispatcher.so.24.0 00:02:20.109 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.109 [327/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.109 [328/710] Linking target lib/librte_lpm.so.24.0 00:02:20.109 [329/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:20.109 [330/710] Linking static target lib/librte_pcapng.a 00:02:20.109 [331/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.109 [332/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.109 [333/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:20.109 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:20.368 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.368 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:20.368 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.368 [338/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:20.627 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.627 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.627 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:20.886 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.886 [343/710] Linking static target lib/librte_power.a 00:02:20.886 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:20.886 [345/710] Linking static target lib/librte_regexdev.a 00:02:20.886 [346/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:20.886 [347/710] Linking static target lib/librte_member.a 00:02:20.886 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:20.886 [349/710] Linking static target lib/librte_rawdev.a 00:02:20.886 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:21.144 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:21.144 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:21.144 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.144 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:21.144 [355/710] Linking static target lib/librte_mldev.a 00:02:21.144 [356/710] Linking target lib/librte_member.so.24.0 00:02:21.402 [357/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:21.402 [358/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.402 [359/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:21.402 [360/710] Linking target lib/librte_rawdev.so.24.0 00:02:21.402 [361/710] 
Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.402 [362/710] Linking target lib/librte_power.so.24.0 00:02:21.402 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.659 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:21.659 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:21.915 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.915 [367/710] Linking static target lib/librte_reorder.a 00:02:21.915 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:21.915 [369/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:21.915 [370/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:21.915 [371/710] Linking static target lib/librte_rib.a 00:02:21.915 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:21.915 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:22.173 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:22.173 [375/710] Linking static target lib/librte_stack.a 00:02:22.173 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.173 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:22.173 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.173 [379/710] Linking static target lib/librte_security.a 00:02:22.173 [380/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.173 [381/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:22.173 [382/710] Linking target lib/librte_rib.so.24.0 00:02:22.173 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.433 [384/710] Linking target lib/librte_stack.so.24.0 00:02:22.433 [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.433 [386/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:22.433 [387/710] Linking target lib/librte_mldev.so.24.0 00:02:22.433 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.692 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.692 [390/710] Linking target lib/librte_security.so.24.0 00:02:22.692 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.692 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:22.692 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.951 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:22.951 [395/710] Linking static target lib/librte_sched.a 00:02:23.210 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.210 [397/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.210 [398/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.469 [399/710] Linking target lib/librte_sched.so.24.0 00:02:23.469 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.469 [401/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:23.469 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:23.728 [403/710] Compiling C 
object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:23.728 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.987 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:23.987 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:24.246 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:24.246 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:24.246 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:24.246 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:24.504 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:24.504 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:24.504 [413/710] Linking static target lib/librte_ipsec.a 00:02:24.761 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.761 [415/710] Linking target lib/librte_ipsec.so.24.0 00:02:24.761 [416/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:24.761 [417/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:24.761 [418/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:24.761 [419/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:24.761 [420/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:25.020 [421/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:25.020 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:25.020 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:25.587 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:25.845 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:25.845 [426/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:25.845 [427/710] Linking static target lib/librte_pdcp.a 00:02:25.845 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:25.845 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:25.845 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:25.845 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:25.845 [432/710] Linking static target lib/librte_fib.a 00:02:26.104 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.104 [434/710] Linking target lib/librte_pdcp.so.24.0 00:02:26.104 [435/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:26.104 [436/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.363 [437/710] Linking target lib/librte_fib.so.24.0 00:02:26.621 [438/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:26.622 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:26.880 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:26.880 [441/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:26.880 [442/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:26.880 [443/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:26.880 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:27.139 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 
00:02:27.139 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:27.139 [447/710] Linking static target lib/librte_port.a 00:02:27.397 [448/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:27.397 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:27.656 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:27.656 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:27.656 [452/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:27.656 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:27.656 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.656 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:27.656 [456/710] Linking target lib/librte_port.so.24.0 00:02:27.915 [457/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:27.915 [458/710] Linking static target lib/librte_pdump.a 00:02:27.915 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:28.174 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.174 [461/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:28.174 [462/710] Linking target lib/librte_pdump.so.24.0 00:02:28.433 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:28.433 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:28.433 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:28.693 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:28.693 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:28.693 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:28.693 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:28.952 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:28.952 [471/710] Linking static target lib/librte_table.a 00:02:28.952 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:29.210 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:29.469 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.469 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:29.469 [476/710] Linking target lib/librte_table.so.24.0 00:02:29.469 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:29.728 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:29.728 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:29.986 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:29.986 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:30.245 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:30.245 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:30.503 [484/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:30.503 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:30.503 [486/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:30.763 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:31.022 [488/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:31.022 [489/710] Linking static target lib/librte_graph.a 00:02:31.022 [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:31.022 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:31.022 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:31.281 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:31.540 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.540 [495/710] Linking target lib/librte_graph.so.24.0 00:02:31.540 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:31.540 [497/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:31.799 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:31.799 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:32.057 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:32.057 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:32.057 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:32.316 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:32.316 [504/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:32.316 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:32.316 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:32.575 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:32.575 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:32.834 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:32.834 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.834 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.834 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:33.092 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:33.092 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:33.092 [515/710] Linking static target lib/librte_node.a 00:02:33.351 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.351 [517/710] Linking target lib/librte_node.so.24.0 00:02:33.351 [518/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:33.351 [519/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:33.351 [520/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:33.351 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:33.609 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:33.609 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.609 [524/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:33.609 [525/710] Linking static target drivers/librte_bus_vdev.a 00:02:33.609 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.609 [527/710] Compiling C object 
drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:33.609 [528/710] Linking static target drivers/librte_bus_pci.a 00:02:33.868 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:33.868 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:33.868 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:33.868 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:33.868 [533/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.868 [534/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:34.127 [535/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:34.127 [536/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:34.127 [537/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:34.127 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.127 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:34.127 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:34.386 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:34.386 [542/710] Linking static target drivers/librte_mempool_ring.a 00:02:34.386 [543/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:34.386 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:34.386 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:34.386 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:34.953 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:34.953 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:34.953 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:34.953 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:34.953 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:35.888 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:35.888 [553/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:35.888 [554/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:35.888 [555/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:36.146 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:36.146 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:36.405 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:36.405 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:36.664 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:36.664 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:36.922 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:37.180 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:37.180 [564/710] 
Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:37.439 [565/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:37.439 [566/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:37.697 [567/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:37.697 [568/710] Linking static target lib/librte_vhost.a 00:02:37.697 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:37.956 [570/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:37.956 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:37.956 [572/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:37.956 [573/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:37.956 [574/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:38.215 [575/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:38.474 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:38.474 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:38.474 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:38.732 [579/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.732 [580/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:38.732 [581/710] Linking target lib/librte_vhost.so.24.0 00:02:38.732 [582/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:38.732 [583/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:38.732 [584/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:38.732 [585/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:38.732 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:38.996 [587/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:38.996 [588/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:38.996 [589/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:39.298 [590/710] Linking static target drivers/librte_net_i40e.a 00:02:39.298 [591/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:39.298 [592/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:39.298 [593/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:39.560 [594/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:39.560 [595/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:39.817 [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.817 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:39.817 [598/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:39.817 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:40.075 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:40.333 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:40.333 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:40.333 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:40.592 
[604/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:40.592 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:40.592 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:40.592 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:41.160 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:41.160 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:41.160 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:41.160 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:41.160 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:41.160 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:41.418 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:41.418 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:41.418 [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:41.418 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:41.677 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:41.934 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:41.934 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:42.192 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:42.192 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:42.192 [623/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:42.192 [624/710] Linking static target lib/librte_pipeline.a 00:02:42.192 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:42.758 [626/710] Linking target app/dpdk-dumpcap 00:02:43.016 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:43.016 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:43.016 [629/710] Linking target app/dpdk-graph 00:02:43.016 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:43.274 [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:43.274 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:43.274 [633/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:43.274 [634/710] Linking target app/dpdk-pdump 00:02:43.533 [635/710] Linking target app/dpdk-proc-info 00:02:43.533 [636/710] Linking target app/dpdk-test-acl 00:02:43.533 [637/710] Linking target app/dpdk-test-cmdline 00:02:43.791 [638/710] Linking target app/dpdk-test-compress-perf 00:02:43.791 [639/710] Linking target app/dpdk-test-dma-perf 00:02:43.791 [640/710] Linking target app/dpdk-test-crypto-perf 00:02:43.791 [641/710] Linking target app/dpdk-test-fib 00:02:44.050 [642/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:44.308 [643/710] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:44.308 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:44.308 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:44.308 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:44.308 [647/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:44.308 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:44.308 [649/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:44.566 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:44.566 [651/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.824 [652/710] Linking target lib/librte_pipeline.so.24.0 00:02:44.824 [653/710] Linking target app/dpdk-test-gpudev 00:02:44.824 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:44.824 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:44.824 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:44.824 [657/710] Linking target app/dpdk-test-eventdev 00:02:45.082 [658/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:45.082 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:45.340 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:45.340 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:45.340 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:45.340 [663/710] Linking target app/dpdk-test-flow-perf 00:02:45.599 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:45.599 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:45.599 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:45.599 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:45.857 [668/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:45.857 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:45.857 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:45.857 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:46.116 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:46.116 [673/710] Linking target app/dpdk-test-bbdev 00:02:46.374 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:46.374 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:46.374 [676/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:46.374 [677/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:46.632 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:46.891 [679/710] Linking target app/dpdk-test-mldev 00:02:46.891 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:46.891 [681/710] Linking target app/dpdk-test-pipeline 00:02:46.891 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:47.150 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:47.408 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:47.408 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:47.408 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:47.666 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:47.666 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:47.925 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:47.925 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:48.183 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:48.183 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:48.183 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:48.442 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:48.701 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:48.701 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:49.268 [697/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:49.268 [698/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:49.268 [699/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:49.268 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:49.268 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:49.268 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:49.527 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:49.527 [704/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:49.527 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:49.527 [706/710] Linking target app/dpdk-test-regex 00:02:49.786 [707/710] Linking target app/dpdk-test-sad 00:02:49.786 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:50.044 [709/710] Linking target app/dpdk-testpmd 00:02:50.302 [710/710] Linking target app/dpdk-test-security-perf 00:02:50.302 16:22:27 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:50.302 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:50.302 [0/1] Installing files. 
00:02:50.563 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.563 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.564 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.564 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.826 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:50.827 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.827 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.828 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.829 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:50.829 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
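That completes the staging of the example sources under build/share/dpdk/examples; they can be rebuilt out of tree against this installation. A minimal sketch, not a step this job performs, assuming the example Makefiles resolve DPDK through pkg-config so that PKG_CONFIG_PATH must point at the libdpdk.pc installed further down in this log:

    # Illustrative rebuild of the installed skeleton example (assumed workflow)
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    make -C /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton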
00:02:50.829 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.829 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.829 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:50.830 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:51.090 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:51.090 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
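Note the pattern above: every DPDK library is installed twice, once as a static archive (librte_*.a) and once as a versioned shared object (librte_*.so.24.0); the unversioned .so and .so.24 names only appear as symlinks at the end of this install. A consistency check one could run afterwards (illustrative, not part of this job):

    # The shared objects should carry the ABI-24 soname the later symlinks point at
    readelf -d /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24.0 | grep SONAME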
00:02:51.090 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:51.090 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:51.090 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:51.090 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:51.090 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:51.090 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:51.090 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:51.090 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:02:51.090 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:51.090 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
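The test and sample binaries land in build/bin, while the four PMDs built here (bus_pci, bus_vdev, mempool_ring, net_i40e) get their shared objects staged in lib/dpdk/pmds-24.0, which a shared DPDK build treats as its default driver plugin directory. A hedged smoke test, not executed by this job; the core and memory-channel counts are illustrative, and a real run would additionally need hugepages plus a NIC bound to a DPDK-compatible driver:

    sudo /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-testpmd -l 0-1 -n 4 -- -i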
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.090 Installing
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.090 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.091 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.352 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.353 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.354 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.354 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
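That completes the header install. The layout is worth noting: the architecture-specific x86 headers go straight into build/include while their portable fallbacks go to build/include/generic, so an application including <rte_atomic.h> picks up the x86 variant, which (in the usual DPDK layout) pulls in the generic one itself. An illustrative check, not a step this job runs:

    ls /home/vagrant/spdk_repo/dpdk/build/include/rte_atomic.h /home/vagrant/spdk_repo/dpdk/build/include/generic/rte_atomic.h
    grep -n 'generic/rte_atomic.h' /home/vagrant/spdk_repo/dpdk/build/include/rte_atomic.h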
/home/vagrant/spdk_repo/dpdk/build/include 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:51.355 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:51.355 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:51.355 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:51.355 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:51.355 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:51.355 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:51.355 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:51.355 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:51.355 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:51.355 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:51.355 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:51.355 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:51.355 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:51.355 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:51.355 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:51.355 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:51.355 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:51.355 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:51.355 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:51.355 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:51.355 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:51.355 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:51.355 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:51.355 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:51.355 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:51.355 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:51.355 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:51.355 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:51.355 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:51.355 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:51.355 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:51.355 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:51.355 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:51.355 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:51.355 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:51.355 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:51.355 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:51.355 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:51.355 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:51.355 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:51.355 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:51.355 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:51.355 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:51.355 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:51.355 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:51.355 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:51.355 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:51.355 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:51.355 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:51.355 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:51.355 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:51.355 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:51.355 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:51.355 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:51.355 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:51.355 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:51.355 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:51.355 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:51.355 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:51.355 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:51.355 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:51.355 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:51.355 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:51.355 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:51.355 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:51.355 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:51.355 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:51.355 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:51.355 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:51.355 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:51.355 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:51.355 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:51.355 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:51.355 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:51.355 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:51.355 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:51.355 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:51.355 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:51.356 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:51.356 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:51.356 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:51.356 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:51.356 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:51.356 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:51.356 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:51.356 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:51.356 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:51.356 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:51.356 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:51.356 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:51.356 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:51.356 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:51.356 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:51.356 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:51.356 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:51.356 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:51.356 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:51.356 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:51.356 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:51.356 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:51.356 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:51.356 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:51.356 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:51.356 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:51.356 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:51.356 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:51.356 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:51.356 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:51.356 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:51.356 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:51.356 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:51.356 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:51.356 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:51.356 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:51.356 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:51.356 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:51.356 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:51.356 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:51.356 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:51.356 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:51.356 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:51.356 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:51.356 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:51.356 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:51.356 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:51.356 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:51.356 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:51.356 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:51.356 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:51.356 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:51.356 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:51.356 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
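The chains of symlink installs above (completed by the librte_net_i40e entries just below) follow the standard ELF shared-library versioning scheme: the real object carries the full ABI version (librte_log.so.24.0), the soname link (librte_log.so.24) is what the dynamic linker resolves at run time, and the unversioned link (librte_log.so) is what 'cc ... -lrte_log' resolves at build time. A minimal shell sketch of that convention, with a hypothetical $DEST standing in for the build/lib directory (an illustration only, not DPDK's install code; meson plus the symlink-drivers-solibs.sh script seen below produce the same layout):

    DEST=/home/vagrant/spdk_repo/dpdk/build/lib            # assumed destination, for illustration
    cp librte_log.so.24.0 "$DEST/"                         # real library, full ABI version in the name
    ln -sf librte_log.so.24.0 "$DEST/librte_log.so.24"     # runtime (soname) link
    ln -sf librte_log.so.24 "$DEST/librte_log.so"          # build-time (dev) link

The './librte_bus_pci.so' -> 'dpdk/pmds-24.0/...' entries additionally gather the driver libraries into a versioned plugin directory, which is where the EAL looks for loadable PMDs at run time.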
00:02:51.356 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:51.356 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:51.356 16:22:28 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:51.356 16:22:28 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:51.356 16:22:28 -- common/autobuild_common.sh@203 -- $ cat 00:02:51.356 ************************************ 00:02:51.356 END TEST build_native_dpdk 00:02:51.356 ************************************ 00:02:51.356 16:22:28 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:51.356 00:02:51.356 real 0m55.904s 00:02:51.356 user 6m39.123s 00:02:51.356 sys 1m7.802s 00:02:51.356 16:22:28 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:51.356 16:22:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.356 16:22:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:51.356 16:22:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:51.356 16:22:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:51.356 16:22:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:51.356 16:22:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:51.356 16:22:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:51.356 16:22:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:51.356 16:22:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:02:51.615 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:51.615 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:51.615 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:51.615 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:52.194 Using 'verbs' RDMA provider 00:03:07.642 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:19.857 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:19.857 go version go1.21.1 linux/amd64 00:03:20.423 Creating mk/config.mk...done. 00:03:20.423 Creating mk/cc.flags.mk...done. 00:03:20.423 Type 'make' to build. 00:03:20.423 16:22:57 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:20.423 16:22:57 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:20.423 16:22:57 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:20.423 16:22:57 -- common/autotest_common.sh@10 -- $ set +x 00:03:20.423 ************************************ 00:03:20.423 START TEST make 00:03:20.423 ************************************ 00:03:20.423 16:22:57 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:20.682 make[1]: Nothing to be done for 'all'. 
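The configure step above locates the just-built DPDK through the pkg-config files installed earlier (libdpdk.pc and libdpdk-libs.pc under build/lib/pkgconfig), which is also how any out-of-tree consumer would compile against this build. A minimal sketch using only standard pkg-config flags and the paths from this run; the commented values indicate the general shape, not verbatim output:

    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk   # prints the DPDK version of this build
    pkg-config --cflags libdpdk       # roughly: -I/home/vagrant/spdk_repo/dpdk/build/include ...
    pkg-config --libs libdpdk         # roughly: -L/home/vagrant/spdk_repo/dpdk/build/lib -lrte_eal ...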
00:03:42.668 CC lib/log/log.o 00:03:42.668 CC lib/log/log_flags.o 00:03:42.668 CC lib/log/log_deprecated.o 00:03:42.668 CC lib/ut/ut.o 00:03:42.668 CC lib/ut_mock/mock.o 00:03:42.668 LIB libspdk_ut.a 00:03:42.668 LIB libspdk_ut_mock.a 00:03:42.668 SO libspdk_ut.so.1.0 00:03:42.668 SO libspdk_ut_mock.so.5.0 00:03:42.668 LIB libspdk_log.a 00:03:42.668 SYMLINK libspdk_ut.so 00:03:42.668 SYMLINK libspdk_ut_mock.so 00:03:42.668 SO libspdk_log.so.6.1 00:03:42.668 SYMLINK libspdk_log.so 00:03:42.668 CC lib/dma/dma.o 00:03:42.668 CXX lib/trace_parser/trace.o 00:03:42.668 CC lib/util/bit_array.o 00:03:42.668 CC lib/util/base64.o 00:03:42.668 CC lib/util/crc32.o 00:03:42.668 CC lib/util/crc16.o 00:03:42.668 CC lib/util/cpuset.o 00:03:42.668 CC lib/util/crc32c.o 00:03:42.668 CC lib/ioat/ioat.o 00:03:42.668 CC lib/vfio_user/host/vfio_user_pci.o 00:03:42.668 CC lib/util/crc32_ieee.o 00:03:42.668 CC lib/vfio_user/host/vfio_user.o 00:03:42.668 CC lib/util/crc64.o 00:03:42.668 CC lib/util/dif.o 00:03:42.668 LIB libspdk_dma.a 00:03:42.668 CC lib/util/fd.o 00:03:42.668 SO libspdk_dma.so.3.0 00:03:42.668 CC lib/util/file.o 00:03:42.668 CC lib/util/hexlify.o 00:03:42.668 CC lib/util/iov.o 00:03:42.668 SYMLINK libspdk_dma.so 00:03:42.668 CC lib/util/math.o 00:03:42.668 CC lib/util/pipe.o 00:03:42.668 LIB libspdk_ioat.a 00:03:42.668 SO libspdk_ioat.so.6.0 00:03:42.668 LIB libspdk_vfio_user.a 00:03:42.668 CC lib/util/strerror_tls.o 00:03:42.668 SYMLINK libspdk_ioat.so 00:03:42.668 CC lib/util/string.o 00:03:42.668 CC lib/util/uuid.o 00:03:42.668 SO libspdk_vfio_user.so.4.0 00:03:42.668 CC lib/util/fd_group.o 00:03:42.668 CC lib/util/xor.o 00:03:42.668 SYMLINK libspdk_vfio_user.so 00:03:42.668 CC lib/util/zipf.o 00:03:42.668 LIB libspdk_util.a 00:03:42.927 SO libspdk_util.so.8.0 00:03:42.927 SYMLINK libspdk_util.so 00:03:42.927 LIB libspdk_trace_parser.a 00:03:43.186 SO libspdk_trace_parser.so.4.0 00:03:43.186 CC lib/vmd/vmd.o 00:03:43.186 CC lib/env_dpdk/env.o 00:03:43.186 CC lib/conf/conf.o 00:03:43.186 CC lib/env_dpdk/memory.o 00:03:43.186 CC lib/vmd/led.o 00:03:43.186 CC lib/idxd/idxd.o 00:03:43.186 CC lib/rdma/common.o 00:03:43.186 CC lib/env_dpdk/pci.o 00:03:43.186 CC lib/json/json_parse.o 00:03:43.186 SYMLINK libspdk_trace_parser.so 00:03:43.186 CC lib/json/json_util.o 00:03:43.186 CC lib/rdma/rdma_verbs.o 00:03:43.186 LIB libspdk_conf.a 00:03:43.186 CC lib/json/json_write.o 00:03:43.445 SO libspdk_conf.so.5.0 00:03:43.446 CC lib/env_dpdk/init.o 00:03:43.446 CC lib/env_dpdk/threads.o 00:03:43.446 SYMLINK libspdk_conf.so 00:03:43.446 CC lib/env_dpdk/pci_ioat.o 00:03:43.446 CC lib/env_dpdk/pci_virtio.o 00:03:43.446 LIB libspdk_rdma.a 00:03:43.446 SO libspdk_rdma.so.5.0 00:03:43.446 CC lib/env_dpdk/pci_vmd.o 00:03:43.446 CC lib/env_dpdk/pci_idxd.o 00:03:43.446 SYMLINK libspdk_rdma.so 00:03:43.446 CC lib/env_dpdk/pci_event.o 00:03:43.446 CC lib/idxd/idxd_user.o 00:03:43.446 LIB libspdk_json.a 00:03:43.704 CC lib/idxd/idxd_kernel.o 00:03:43.704 SO libspdk_json.so.5.1 00:03:43.704 CC lib/env_dpdk/sigbus_handler.o 00:03:43.704 CC lib/env_dpdk/pci_dpdk.o 00:03:43.704 SYMLINK libspdk_json.so 00:03:43.704 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:43.705 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:43.705 LIB libspdk_vmd.a 00:03:43.705 SO libspdk_vmd.so.5.0 00:03:43.705 LIB libspdk_idxd.a 00:03:43.705 SYMLINK libspdk_vmd.so 00:03:43.705 SO libspdk_idxd.so.11.0 00:03:43.963 SYMLINK libspdk_idxd.so 00:03:43.963 CC lib/jsonrpc/jsonrpc_server.o 00:03:43.963 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:43.963 CC 
lib/jsonrpc/jsonrpc_client.o 00:03:43.963 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:44.222 LIB libspdk_jsonrpc.a 00:03:44.222 SO libspdk_jsonrpc.so.5.1 00:03:44.222 SYMLINK libspdk_jsonrpc.so 00:03:44.481 LIB libspdk_env_dpdk.a 00:03:44.482 CC lib/rpc/rpc.o 00:03:44.482 SO libspdk_env_dpdk.so.13.0 00:03:44.482 SYMLINK libspdk_env_dpdk.so 00:03:44.482 LIB libspdk_rpc.a 00:03:44.740 SO libspdk_rpc.so.5.0 00:03:44.740 SYMLINK libspdk_rpc.so 00:03:44.740 CC lib/trace/trace_flags.o 00:03:44.740 CC lib/trace/trace.o 00:03:44.740 CC lib/sock/sock.o 00:03:44.740 CC lib/notify/notify.o 00:03:44.740 CC lib/sock/sock_rpc.o 00:03:44.740 CC lib/notify/notify_rpc.o 00:03:44.740 CC lib/trace/trace_rpc.o 00:03:44.998 LIB libspdk_notify.a 00:03:44.998 SO libspdk_notify.so.5.0 00:03:44.998 LIB libspdk_trace.a 00:03:44.998 SYMLINK libspdk_notify.so 00:03:44.998 SO libspdk_trace.so.9.0 00:03:45.257 LIB libspdk_sock.a 00:03:45.257 SYMLINK libspdk_trace.so 00:03:45.257 SO libspdk_sock.so.8.0 00:03:45.257 SYMLINK libspdk_sock.so 00:03:45.257 CC lib/thread/thread.o 00:03:45.257 CC lib/thread/iobuf.o 00:03:45.516 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:45.516 CC lib/nvme/nvme_ctrlr.o 00:03:45.516 CC lib/nvme/nvme_fabric.o 00:03:45.516 CC lib/nvme/nvme_ns_cmd.o 00:03:45.516 CC lib/nvme/nvme_ns.o 00:03:45.516 CC lib/nvme/nvme_pcie_common.o 00:03:45.516 CC lib/nvme/nvme_qpair.o 00:03:45.516 CC lib/nvme/nvme_pcie.o 00:03:45.775 CC lib/nvme/nvme.o 00:03:46.034 CC lib/nvme/nvme_quirks.o 00:03:46.034 CC lib/nvme/nvme_transport.o 00:03:46.034 CC lib/nvme/nvme_discovery.o 00:03:46.293 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:46.293 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:46.293 CC lib/nvme/nvme_tcp.o 00:03:46.551 CC lib/nvme/nvme_opal.o 00:03:46.551 CC lib/nvme/nvme_io_msg.o 00:03:46.552 CC lib/nvme/nvme_poll_group.o 00:03:46.552 LIB libspdk_thread.a 00:03:46.552 SO libspdk_thread.so.9.0 00:03:46.810 CC lib/nvme/nvme_zns.o 00:03:46.810 SYMLINK libspdk_thread.so 00:03:46.810 CC lib/nvme/nvme_cuse.o 00:03:46.810 CC lib/nvme/nvme_vfio_user.o 00:03:46.810 CC lib/nvme/nvme_rdma.o 00:03:47.069 CC lib/accel/accel.o 00:03:47.069 CC lib/blob/blobstore.o 00:03:47.069 CC lib/blob/request.o 00:03:47.328 CC lib/blob/zeroes.o 00:03:47.328 CC lib/blob/blob_bs_dev.o 00:03:47.328 CC lib/accel/accel_rpc.o 00:03:47.328 CC lib/accel/accel_sw.o 00:03:47.680 CC lib/init/json_config.o 00:03:47.680 CC lib/init/subsystem.o 00:03:47.680 CC lib/init/subsystem_rpc.o 00:03:47.680 CC lib/virtio/virtio.o 00:03:47.680 CC lib/init/rpc.o 00:03:47.680 CC lib/virtio/virtio_vhost_user.o 00:03:47.680 CC lib/virtio/virtio_vfio_user.o 00:03:47.680 CC lib/virtio/virtio_pci.o 00:03:47.680 LIB libspdk_init.a 00:03:47.680 SO libspdk_init.so.4.0 00:03:47.680 SYMLINK libspdk_init.so 00:03:47.939 LIB libspdk_accel.a 00:03:47.939 LIB libspdk_virtio.a 00:03:47.939 SO libspdk_accel.so.14.0 00:03:47.939 CC lib/event/reactor.o 00:03:47.939 CC lib/event/log_rpc.o 00:03:47.939 CC lib/event/app.o 00:03:47.939 CC lib/event/app_rpc.o 00:03:47.939 CC lib/event/scheduler_static.o 00:03:47.939 SO libspdk_virtio.so.6.0 00:03:47.939 SYMLINK libspdk_accel.so 00:03:47.939 SYMLINK libspdk_virtio.so 00:03:48.197 LIB libspdk_nvme.a 00:03:48.197 CC lib/bdev/bdev.o 00:03:48.197 CC lib/bdev/bdev_rpc.o 00:03:48.197 CC lib/bdev/bdev_zone.o 00:03:48.197 CC lib/bdev/part.o 00:03:48.197 CC lib/bdev/scsi_nvme.o 00:03:48.197 SO libspdk_nvme.so.12.0 00:03:48.455 LIB libspdk_event.a 00:03:48.455 SO libspdk_event.so.12.0 00:03:48.455 SYMLINK libspdk_event.so 00:03:48.455 SYMLINK libspdk_nvme.so 00:03:49.390 
LIB libspdk_blob.a 00:03:49.390 SO libspdk_blob.so.10.1 00:03:49.390 SYMLINK libspdk_blob.so 00:03:49.648 CC lib/lvol/lvol.o 00:03:49.648 CC lib/blobfs/blobfs.o 00:03:49.648 CC lib/blobfs/tree.o 00:03:50.584 LIB libspdk_bdev.a 00:03:50.584 LIB libspdk_lvol.a 00:03:50.584 SO libspdk_bdev.so.14.0 00:03:50.584 SO libspdk_lvol.so.9.1 00:03:50.584 SYMLINK libspdk_lvol.so 00:03:50.584 LIB libspdk_blobfs.a 00:03:50.584 SYMLINK libspdk_bdev.so 00:03:50.584 SO libspdk_blobfs.so.9.0 00:03:50.584 SYMLINK libspdk_blobfs.so 00:03:50.584 CC lib/scsi/lun.o 00:03:50.584 CC lib/scsi/dev.o 00:03:50.584 CC lib/scsi/port.o 00:03:50.584 CC lib/scsi/scsi.o 00:03:50.584 CC lib/scsi/scsi_bdev.o 00:03:50.584 CC lib/ublk/ublk.o 00:03:50.584 CC lib/ublk/ublk_rpc.o 00:03:50.584 CC lib/nbd/nbd.o 00:03:50.584 CC lib/ftl/ftl_core.o 00:03:50.584 CC lib/nvmf/ctrlr.o 00:03:50.843 CC lib/scsi/scsi_pr.o 00:03:50.843 CC lib/scsi/scsi_rpc.o 00:03:50.843 CC lib/scsi/task.o 00:03:50.843 CC lib/ftl/ftl_init.o 00:03:50.843 CC lib/ftl/ftl_layout.o 00:03:50.843 CC lib/ftl/ftl_debug.o 00:03:51.101 CC lib/ftl/ftl_io.o 00:03:51.101 CC lib/nvmf/ctrlr_discovery.o 00:03:51.101 CC lib/nbd/nbd_rpc.o 00:03:51.101 CC lib/nvmf/ctrlr_bdev.o 00:03:51.101 CC lib/nvmf/subsystem.o 00:03:51.101 LIB libspdk_scsi.a 00:03:51.101 LIB libspdk_ublk.a 00:03:51.101 CC lib/ftl/ftl_sb.o 00:03:51.101 SO libspdk_scsi.so.8.0 00:03:51.101 CC lib/nvmf/nvmf.o 00:03:51.361 LIB libspdk_nbd.a 00:03:51.361 SO libspdk_ublk.so.2.0 00:03:51.361 SO libspdk_nbd.so.6.0 00:03:51.361 CC lib/nvmf/nvmf_rpc.o 00:03:51.361 SYMLINK libspdk_scsi.so 00:03:51.361 CC lib/nvmf/transport.o 00:03:51.361 SYMLINK libspdk_ublk.so 00:03:51.361 CC lib/nvmf/tcp.o 00:03:51.361 SYMLINK libspdk_nbd.so 00:03:51.361 CC lib/nvmf/rdma.o 00:03:51.361 CC lib/ftl/ftl_l2p.o 00:03:51.361 CC lib/ftl/ftl_l2p_flat.o 00:03:51.620 CC lib/ftl/ftl_nv_cache.o 00:03:51.888 CC lib/iscsi/conn.o 00:03:51.888 CC lib/vhost/vhost.o 00:03:51.888 CC lib/vhost/vhost_rpc.o 00:03:52.161 CC lib/iscsi/init_grp.o 00:03:52.161 CC lib/ftl/ftl_band.o 00:03:52.161 CC lib/ftl/ftl_band_ops.o 00:03:52.161 CC lib/ftl/ftl_writer.o 00:03:52.161 CC lib/iscsi/iscsi.o 00:03:52.161 CC lib/ftl/ftl_rq.o 00:03:52.420 CC lib/ftl/ftl_reloc.o 00:03:52.420 CC lib/ftl/ftl_l2p_cache.o 00:03:52.420 CC lib/vhost/vhost_scsi.o 00:03:52.420 CC lib/iscsi/md5.o 00:03:52.420 CC lib/iscsi/param.o 00:03:52.420 CC lib/ftl/ftl_p2l.o 00:03:52.679 CC lib/vhost/vhost_blk.o 00:03:52.679 CC lib/iscsi/portal_grp.o 00:03:52.679 CC lib/ftl/mngt/ftl_mngt.o 00:03:52.679 CC lib/iscsi/tgt_node.o 00:03:52.938 CC lib/iscsi/iscsi_subsystem.o 00:03:52.938 CC lib/iscsi/iscsi_rpc.o 00:03:52.938 CC lib/vhost/rte_vhost_user.o 00:03:52.938 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:52.938 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:53.197 CC lib/iscsi/task.o 00:03:53.197 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:53.197 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:53.197 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:53.197 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:53.198 LIB libspdk_nvmf.a 00:03:53.198 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:53.198 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:53.456 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:53.456 SO libspdk_nvmf.so.17.0 00:03:53.456 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:53.456 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:53.456 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:53.456 CC lib/ftl/utils/ftl_conf.o 00:03:53.456 CC lib/ftl/utils/ftl_md.o 00:03:53.456 LIB libspdk_iscsi.a 00:03:53.456 SYMLINK libspdk_nvmf.so 00:03:53.456 CC lib/ftl/utils/ftl_mempool.o 00:03:53.456 CC 
lib/ftl/utils/ftl_bitmap.o 00:03:53.717 CC lib/ftl/utils/ftl_property.o 00:03:53.717 SO libspdk_iscsi.so.7.0 00:03:53.717 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:53.717 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:53.717 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:53.717 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:53.717 SYMLINK libspdk_iscsi.so 00:03:53.717 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:53.717 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:53.717 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:53.717 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.717 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.717 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.717 LIB libspdk_vhost.a 00:03:53.717 CC lib/ftl/base/ftl_base_dev.o 00:03:53.976 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.976 CC lib/ftl/ftl_trace.o 00:03:53.976 SO libspdk_vhost.so.7.1 00:03:53.976 SYMLINK libspdk_vhost.so 00:03:53.976 LIB libspdk_ftl.a 00:03:54.235 SO libspdk_ftl.so.8.0 00:03:54.494 SYMLINK libspdk_ftl.so 00:03:54.752 CC module/env_dpdk/env_dpdk_rpc.o 00:03:54.752 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.752 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.752 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:54.753 CC module/accel/dsa/accel_dsa.o 00:03:54.753 CC module/accel/error/accel_error.o 00:03:54.753 CC module/blob/bdev/blob_bdev.o 00:03:54.753 CC module/sock/posix/posix.o 00:03:54.753 CC module/accel/ioat/accel_ioat.o 00:03:54.753 CC module/accel/iaa/accel_iaa.o 00:03:54.753 LIB libspdk_env_dpdk_rpc.a 00:03:54.753 SO libspdk_env_dpdk_rpc.so.5.0 00:03:55.012 LIB libspdk_scheduler_gscheduler.a 00:03:55.012 SYMLINK libspdk_env_dpdk_rpc.so 00:03:55.012 CC module/accel/error/accel_error_rpc.o 00:03:55.012 SO libspdk_scheduler_gscheduler.so.3.0 00:03:55.012 LIB libspdk_scheduler_dpdk_governor.a 00:03:55.012 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:55.012 LIB libspdk_scheduler_dynamic.a 00:03:55.012 CC module/accel/ioat/accel_ioat_rpc.o 00:03:55.012 CC module/accel/iaa/accel_iaa_rpc.o 00:03:55.012 SYMLINK libspdk_scheduler_gscheduler.so 00:03:55.012 SO libspdk_scheduler_dynamic.so.3.0 00:03:55.012 CC module/accel/dsa/accel_dsa_rpc.o 00:03:55.012 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:55.012 LIB libspdk_blob_bdev.a 00:03:55.012 SYMLINK libspdk_scheduler_dynamic.so 00:03:55.012 LIB libspdk_accel_error.a 00:03:55.012 SO libspdk_blob_bdev.so.10.1 00:03:55.012 SO libspdk_accel_error.so.1.0 00:03:55.012 LIB libspdk_accel_ioat.a 00:03:55.012 LIB libspdk_accel_iaa.a 00:03:55.271 SYMLINK libspdk_blob_bdev.so 00:03:55.271 LIB libspdk_accel_dsa.a 00:03:55.271 SYMLINK libspdk_accel_error.so 00:03:55.271 SO libspdk_accel_ioat.so.5.0 00:03:55.271 SO libspdk_accel_iaa.so.2.0 00:03:55.271 SO libspdk_accel_dsa.so.4.0 00:03:55.271 SYMLINK libspdk_accel_iaa.so 00:03:55.271 SYMLINK libspdk_accel_ioat.so 00:03:55.271 SYMLINK libspdk_accel_dsa.so 00:03:55.271 CC module/blobfs/bdev/blobfs_bdev.o 00:03:55.271 CC module/bdev/gpt/gpt.o 00:03:55.271 CC module/bdev/null/bdev_null.o 00:03:55.271 CC module/bdev/lvol/vbdev_lvol.o 00:03:55.271 CC module/bdev/error/vbdev_error.o 00:03:55.271 CC module/bdev/delay/vbdev_delay.o 00:03:55.271 CC module/bdev/malloc/bdev_malloc.o 00:03:55.271 CC module/bdev/passthru/vbdev_passthru.o 00:03:55.271 CC module/bdev/nvme/bdev_nvme.o 00:03:55.530 LIB libspdk_sock_posix.a 00:03:55.530 SO libspdk_sock_posix.so.5.0 00:03:55.530 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:55.530 CC module/bdev/gpt/vbdev_gpt.o 00:03:55.530 SYMLINK libspdk_sock_posix.so 00:03:55.530 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.530 CC module/bdev/null/bdev_null_rpc.o 00:03:55.530 CC module/bdev/error/vbdev_error_rpc.o 00:03:55.530 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:55.789 LIB libspdk_blobfs_bdev.a 00:03:55.789 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:55.789 SO libspdk_blobfs_bdev.so.5.0 00:03:55.789 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:55.789 LIB libspdk_bdev_null.a 00:03:55.789 LIB libspdk_bdev_error.a 00:03:55.789 SYMLINK libspdk_blobfs_bdev.so 00:03:55.789 CC module/bdev/nvme/nvme_rpc.o 00:03:55.789 SO libspdk_bdev_error.so.5.0 00:03:55.789 SO libspdk_bdev_null.so.5.0 00:03:55.789 LIB libspdk_bdev_gpt.a 00:03:55.789 LIB libspdk_bdev_passthru.a 00:03:55.789 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:55.789 SO libspdk_bdev_gpt.so.5.0 00:03:55.789 SO libspdk_bdev_passthru.so.5.0 00:03:55.789 SYMLINK libspdk_bdev_error.so 00:03:55.789 SYMLINK libspdk_bdev_null.so 00:03:55.789 LIB libspdk_bdev_malloc.a 00:03:55.789 CC module/bdev/nvme/bdev_mdns_client.o 00:03:55.789 SYMLINK libspdk_bdev_passthru.so 00:03:55.789 SO libspdk_bdev_malloc.so.5.0 00:03:55.789 SYMLINK libspdk_bdev_gpt.so 00:03:55.789 LIB libspdk_bdev_delay.a 00:03:56.048 SO libspdk_bdev_delay.so.5.0 00:03:56.048 SYMLINK libspdk_bdev_malloc.so 00:03:56.048 CC module/bdev/raid/bdev_raid.o 00:03:56.048 CC module/bdev/split/vbdev_split.o 00:03:56.048 CC module/bdev/split/vbdev_split_rpc.o 00:03:56.048 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:56.048 SYMLINK libspdk_bdev_delay.so 00:03:56.048 CC module/bdev/raid/bdev_raid_rpc.o 00:03:56.048 CC module/bdev/aio/bdev_aio.o 00:03:56.048 LIB libspdk_bdev_lvol.a 00:03:56.048 SO libspdk_bdev_lvol.so.5.0 00:03:56.048 SYMLINK libspdk_bdev_lvol.so 00:03:56.048 CC module/bdev/aio/bdev_aio_rpc.o 00:03:56.307 LIB libspdk_bdev_split.a 00:03:56.307 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.307 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.307 CC module/bdev/ftl/bdev_ftl.o 00:03:56.307 SO libspdk_bdev_split.so.5.0 00:03:56.307 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.307 SYMLINK libspdk_bdev_split.so 00:03:56.307 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.307 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:56.307 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:56.307 LIB libspdk_bdev_aio.a 00:03:56.307 SO libspdk_bdev_aio.so.5.0 00:03:56.307 SYMLINK libspdk_bdev_aio.so 00:03:56.307 CC module/bdev/raid/raid0.o 00:03:56.307 CC module/bdev/raid/raid1.o 00:03:56.566 LIB libspdk_bdev_zone_block.a 00:03:56.566 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:56.566 SO libspdk_bdev_zone_block.so.5.0 00:03:56.566 CC module/bdev/raid/concat.o 00:03:56.566 SYMLINK libspdk_bdev_zone_block.so 00:03:56.566 CC module/bdev/nvme/vbdev_opal.o 00:03:56.566 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:56.566 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:56.566 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:56.566 LIB libspdk_bdev_ftl.a 00:03:56.566 LIB libspdk_bdev_virtio.a 00:03:56.825 LIB libspdk_bdev_iscsi.a 00:03:56.825 SO libspdk_bdev_ftl.so.5.0 00:03:56.825 SO libspdk_bdev_iscsi.so.5.0 00:03:56.825 SO libspdk_bdev_virtio.so.5.0 00:03:56.825 SYMLINK libspdk_bdev_ftl.so 00:03:56.826 SYMLINK libspdk_bdev_iscsi.so 00:03:56.826 LIB libspdk_bdev_raid.a 00:03:56.826 SYMLINK libspdk_bdev_virtio.so 00:03:56.826 SO libspdk_bdev_raid.so.5.0 00:03:56.826 SYMLINK libspdk_bdev_raid.so 00:03:57.393 LIB libspdk_bdev_nvme.a 00:03:57.393 SO libspdk_bdev_nvme.so.6.0 00:03:57.393 SYMLINK libspdk_bdev_nvme.so 00:03:57.651 CC module/event/subsystems/vmd/vmd.o 00:03:57.651 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:03:57.651 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:57.651 CC module/event/subsystems/iobuf/iobuf.o 00:03:57.651 CC module/event/subsystems/sock/sock.o 00:03:57.651 CC module/event/subsystems/scheduler/scheduler.o 00:03:57.651 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:57.910 LIB libspdk_event_sock.a 00:03:57.910 LIB libspdk_event_scheduler.a 00:03:57.910 LIB libspdk_event_iobuf.a 00:03:57.910 SO libspdk_event_sock.so.4.0 00:03:57.910 SO libspdk_event_scheduler.so.3.0 00:03:57.910 LIB libspdk_event_vmd.a 00:03:57.910 LIB libspdk_event_vhost_blk.a 00:03:57.910 SO libspdk_event_iobuf.so.2.0 00:03:57.910 SO libspdk_event_vmd.so.5.0 00:03:57.910 SO libspdk_event_vhost_blk.so.2.0 00:03:57.910 SYMLINK libspdk_event_scheduler.so 00:03:57.910 SYMLINK libspdk_event_sock.so 00:03:57.910 SYMLINK libspdk_event_iobuf.so 00:03:57.910 SYMLINK libspdk_event_vhost_blk.so 00:03:57.910 SYMLINK libspdk_event_vmd.so 00:03:58.169 CC module/event/subsystems/accel/accel.o 00:03:58.427 LIB libspdk_event_accel.a 00:03:58.427 SO libspdk_event_accel.so.5.0 00:03:58.427 SYMLINK libspdk_event_accel.so 00:03:58.686 CC module/event/subsystems/bdev/bdev.o 00:03:58.686 LIB libspdk_event_bdev.a 00:03:58.686 SO libspdk_event_bdev.so.5.0 00:03:58.944 SYMLINK libspdk_event_bdev.so 00:03:58.944 CC module/event/subsystems/nbd/nbd.o 00:03:58.944 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:58.944 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:58.944 CC module/event/subsystems/ublk/ublk.o 00:03:58.944 CC module/event/subsystems/scsi/scsi.o 00:03:59.203 LIB libspdk_event_ublk.a 00:03:59.203 LIB libspdk_event_nbd.a 00:03:59.203 SO libspdk_event_ublk.so.2.0 00:03:59.203 LIB libspdk_event_scsi.a 00:03:59.203 SO libspdk_event_nbd.so.5.0 00:03:59.203 SO libspdk_event_scsi.so.5.0 00:03:59.203 SYMLINK libspdk_event_ublk.so 00:03:59.203 SYMLINK libspdk_event_nbd.so 00:03:59.203 SYMLINK libspdk_event_scsi.so 00:03:59.203 LIB libspdk_event_nvmf.a 00:03:59.203 SO libspdk_event_nvmf.so.5.0 00:03:59.462 SYMLINK libspdk_event_nvmf.so 00:03:59.462 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:59.462 CC module/event/subsystems/iscsi/iscsi.o 00:03:59.462 LIB libspdk_event_vhost_scsi.a 00:03:59.721 LIB libspdk_event_iscsi.a 00:03:59.721 SO libspdk_event_vhost_scsi.so.2.0 00:03:59.721 SO libspdk_event_iscsi.so.5.0 00:03:59.721 SYMLINK libspdk_event_vhost_scsi.so 00:03:59.721 SYMLINK libspdk_event_iscsi.so 00:03:59.721 SO libspdk.so.5.0 00:03:59.721 SYMLINK libspdk.so 00:03:59.980 CXX app/trace/trace.o 00:03:59.980 CC app/spdk_lspci/spdk_lspci.o 00:03:59.980 CC app/trace_record/trace_record.o 00:03:59.980 CC app/iscsi_tgt/iscsi_tgt.o 00:03:59.980 CC app/nvmf_tgt/nvmf_main.o 00:03:59.980 CC examples/accel/perf/accel_perf.o 00:03:59.980 CC app/spdk_tgt/spdk_tgt.o 00:04:00.238 CC examples/bdev/hello_world/hello_bdev.o 00:04:00.238 CC test/accel/dif/dif.o 00:04:00.238 CC examples/blob/hello_world/hello_blob.o 00:04:00.238 LINK spdk_lspci 00:04:00.238 LINK nvmf_tgt 00:04:00.238 LINK hello_bdev 00:04:00.238 LINK iscsi_tgt 00:04:00.238 LINK spdk_trace_record 00:04:00.497 LINK spdk_tgt 00:04:00.497 CC app/spdk_nvme_perf/perf.o 00:04:00.497 LINK hello_blob 00:04:00.497 LINK spdk_trace 00:04:00.497 LINK dif 00:04:00.497 CC app/spdk_nvme_identify/identify.o 00:04:00.497 LINK accel_perf 00:04:00.497 CC examples/blob/cli/blobcli.o 00:04:00.497 CC app/spdk_nvme_discover/discovery_aer.o 00:04:00.756 CC app/spdk_top/spdk_top.o 00:04:00.756 CC examples/bdev/bdevperf/bdevperf.o 00:04:00.756 
CC app/vhost/vhost.o 00:04:00.756 CC app/spdk_dd/spdk_dd.o 00:04:00.756 LINK spdk_nvme_discover 00:04:00.756 CC examples/ioat/perf/perf.o 00:04:00.756 CC test/app/bdev_svc/bdev_svc.o 00:04:00.756 LINK vhost 00:04:01.016 LINK blobcli 00:04:01.016 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:01.016 LINK ioat_perf 00:04:01.016 LINK bdev_svc 00:04:01.016 LINK spdk_dd 00:04:01.016 CC test/app/histogram_perf/histogram_perf.o 00:04:01.016 LINK spdk_nvme_perf 00:04:01.275 CC examples/ioat/verify/verify.o 00:04:01.275 LINK histogram_perf 00:04:01.275 LINK spdk_nvme_identify 00:04:01.275 CC examples/nvme/hello_world/hello_world.o 00:04:01.275 CC examples/sock/hello_world/hello_sock.o 00:04:01.275 CC examples/vmd/lsvmd/lsvmd.o 00:04:01.275 LINK bdevperf 00:04:01.275 LINK nvme_fuzz 00:04:01.275 LINK verify 00:04:01.534 CC examples/nvmf/nvmf/nvmf.o 00:04:01.534 CC examples/util/zipf/zipf.o 00:04:01.534 LINK lsvmd 00:04:01.534 LINK spdk_top 00:04:01.534 LINK hello_world 00:04:01.534 CC examples/thread/thread/thread_ex.o 00:04:01.534 LINK hello_sock 00:04:01.534 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:01.534 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:01.534 LINK zipf 00:04:01.534 CC app/fio/nvme/fio_plugin.o 00:04:01.534 CC examples/vmd/led/led.o 00:04:01.793 CC examples/nvme/reconnect/reconnect.o 00:04:01.793 CC app/fio/bdev/fio_plugin.o 00:04:01.793 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:01.793 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:01.793 LINK thread 00:04:01.794 LINK nvmf 00:04:01.794 CC examples/nvme/arbitration/arbitration.o 00:04:01.794 LINK led 00:04:02.054 CC examples/nvme/hotplug/hotplug.o 00:04:02.054 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:02.054 CC test/app/jsoncat/jsoncat.o 00:04:02.054 LINK reconnect 00:04:02.054 LINK vhost_fuzz 00:04:02.054 LINK arbitration 00:04:02.054 LINK spdk_bdev 00:04:02.054 LINK jsoncat 00:04:02.312 LINK cmb_copy 00:04:02.312 LINK nvme_manage 00:04:02.312 LINK hotplug 00:04:02.312 CC test/app/stub/stub.o 00:04:02.312 LINK spdk_nvme 00:04:02.312 CC examples/nvme/abort/abort.o 00:04:02.312 CC test/bdev/bdevio/bdevio.o 00:04:02.312 CC examples/idxd/perf/perf.o 00:04:02.312 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:02.312 TEST_HEADER include/spdk/accel.h 00:04:02.312 TEST_HEADER include/spdk/accel_module.h 00:04:02.312 TEST_HEADER include/spdk/assert.h 00:04:02.312 TEST_HEADER include/spdk/barrier.h 00:04:02.312 TEST_HEADER include/spdk/base64.h 00:04:02.312 TEST_HEADER include/spdk/bdev.h 00:04:02.312 TEST_HEADER include/spdk/bdev_module.h 00:04:02.312 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.312 TEST_HEADER include/spdk/bit_array.h 00:04:02.312 TEST_HEADER include/spdk/bit_pool.h 00:04:02.312 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.312 LINK stub 00:04:02.312 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.312 TEST_HEADER include/spdk/blobfs.h 00:04:02.312 TEST_HEADER include/spdk/blob.h 00:04:02.312 TEST_HEADER include/spdk/conf.h 00:04:02.312 TEST_HEADER include/spdk/config.h 00:04:02.312 TEST_HEADER include/spdk/cpuset.h 00:04:02.312 TEST_HEADER include/spdk/crc16.h 00:04:02.312 TEST_HEADER include/spdk/crc32.h 00:04:02.312 TEST_HEADER include/spdk/crc64.h 00:04:02.312 TEST_HEADER include/spdk/dif.h 00:04:02.312 TEST_HEADER include/spdk/dma.h 00:04:02.572 CC test/blobfs/mkfs/mkfs.o 00:04:02.572 TEST_HEADER include/spdk/endian.h 00:04:02.572 TEST_HEADER include/spdk/env_dpdk.h 00:04:02.572 TEST_HEADER include/spdk/env.h 00:04:02.572 TEST_HEADER include/spdk/event.h 00:04:02.572 TEST_HEADER 
include/spdk/fd_group.h 00:04:02.572 TEST_HEADER include/spdk/fd.h 00:04:02.572 TEST_HEADER include/spdk/file.h 00:04:02.572 TEST_HEADER include/spdk/ftl.h 00:04:02.572 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.572 TEST_HEADER include/spdk/hexlify.h 00:04:02.572 TEST_HEADER include/spdk/histogram_data.h 00:04:02.572 TEST_HEADER include/spdk/idxd.h 00:04:02.572 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.572 TEST_HEADER include/spdk/init.h 00:04:02.572 TEST_HEADER include/spdk/ioat.h 00:04:02.572 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.572 TEST_HEADER include/spdk/iscsi_spec.h 00:04:02.572 TEST_HEADER include/spdk/json.h 00:04:02.572 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.572 TEST_HEADER include/spdk/likely.h 00:04:02.572 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.572 TEST_HEADER include/spdk/log.h 00:04:02.572 TEST_HEADER include/spdk/lvol.h 00:04:02.572 TEST_HEADER include/spdk/memory.h 00:04:02.572 TEST_HEADER include/spdk/mmio.h 00:04:02.572 TEST_HEADER include/spdk/nbd.h 00:04:02.572 TEST_HEADER include/spdk/notify.h 00:04:02.572 TEST_HEADER include/spdk/nvme.h 00:04:02.572 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.572 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.572 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.572 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.572 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.572 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.572 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.572 TEST_HEADER include/spdk/nvmf.h 00:04:02.572 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.572 TEST_HEADER include/spdk/nvmf_transport.h 00:04:02.572 TEST_HEADER include/spdk/opal.h 00:04:02.572 TEST_HEADER include/spdk/opal_spec.h 00:04:02.572 TEST_HEADER include/spdk/pci_ids.h 00:04:02.572 TEST_HEADER include/spdk/pipe.h 00:04:02.572 TEST_HEADER include/spdk/queue.h 00:04:02.572 TEST_HEADER include/spdk/reduce.h 00:04:02.572 TEST_HEADER include/spdk/rpc.h 00:04:02.572 TEST_HEADER include/spdk/scheduler.h 00:04:02.572 TEST_HEADER include/spdk/scsi.h 00:04:02.572 TEST_HEADER include/spdk/scsi_spec.h 00:04:02.572 LINK pmr_persistence 00:04:02.572 TEST_HEADER include/spdk/sock.h 00:04:02.572 TEST_HEADER include/spdk/stdinc.h 00:04:02.572 TEST_HEADER include/spdk/string.h 00:04:02.572 TEST_HEADER include/spdk/thread.h 00:04:02.572 TEST_HEADER include/spdk/trace.h 00:04:02.572 TEST_HEADER include/spdk/trace_parser.h 00:04:02.572 TEST_HEADER include/spdk/tree.h 00:04:02.572 TEST_HEADER include/spdk/ublk.h 00:04:02.572 TEST_HEADER include/spdk/util.h 00:04:02.572 TEST_HEADER include/spdk/uuid.h 00:04:02.572 TEST_HEADER include/spdk/version.h 00:04:02.572 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:02.572 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:02.572 TEST_HEADER include/spdk/vhost.h 00:04:02.572 TEST_HEADER include/spdk/vmd.h 00:04:02.572 TEST_HEADER include/spdk/xor.h 00:04:02.572 TEST_HEADER include/spdk/zipf.h 00:04:02.572 CXX test/cpp_headers/accel.o 00:04:02.572 LINK mkfs 00:04:02.572 LINK abort 00:04:02.572 LINK interrupt_tgt 00:04:02.831 CC test/dma/test_dma/test_dma.o 00:04:02.831 LINK idxd_perf 00:04:02.831 LINK bdevio 00:04:02.831 CXX test/cpp_headers/accel_module.o 00:04:02.831 CXX test/cpp_headers/assert.o 00:04:02.831 CXX test/cpp_headers/barrier.o 00:04:02.831 CXX test/cpp_headers/base64.o 00:04:02.831 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.090 CC test/event/event_perf/event_perf.o 00:04:03.090 CXX test/cpp_headers/bdev.o 00:04:03.090 CC test/rpc_client/rpc_client_test.o 00:04:03.090 CC test/nvme/aer/aer.o 
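The TEST_HEADER list above, followed by the CXX test/cpp_headers/*.o objects, appears to be a public-header hygiene pass: each header under include/spdk is compiled in its own translation unit, so a header that silently depends on something its includer happened to pull in first fails on its own. A stand-alone sketch of the same idea, assuming it runs from a source tree with an include/spdk directory (SPDK's real harness is generated by its test build; this loop is only illustrative):

    for h in include/spdk/*.h; do
      tu=$(mktemp --suffix=.cpp)                        # one throwaway TU per header
      echo "#include <spdk/$(basename "$h")>" > "$tu"
      c++ -I include -c "$tu" -o /dev/null || echo "not self-contained: $h"
      rm -f "$tu"
    done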
00:04:03.090 LINK iscsi_fuzz 00:04:03.090 CC test/lvol/esnap/esnap.o 00:04:03.090 LINK test_dma 00:04:03.090 CC test/thread/poller_perf/poller_perf.o 00:04:03.090 LINK event_perf 00:04:03.090 CXX test/cpp_headers/bdev_module.o 00:04:03.090 LINK rpc_client_test 00:04:03.090 LINK poller_perf 00:04:03.349 LINK aer 00:04:03.349 CC test/event/reactor/reactor.o 00:04:03.349 CC test/event/reactor_perf/reactor_perf.o 00:04:03.349 CC test/event/app_repeat/app_repeat.o 00:04:03.349 CXX test/cpp_headers/bdev_zone.o 00:04:03.349 CC test/nvme/reset/reset.o 00:04:03.349 LINK mem_callbacks 00:04:03.349 CC test/event/scheduler/scheduler.o 00:04:03.349 LINK reactor 00:04:03.608 LINK reactor_perf 00:04:03.608 LINK app_repeat 00:04:03.608 CXX test/cpp_headers/bit_array.o 00:04:03.608 CC test/nvme/sgl/sgl.o 00:04:03.608 LINK scheduler 00:04:03.608 CXX test/cpp_headers/bit_pool.o 00:04:03.608 CXX test/cpp_headers/blob_bdev.o 00:04:03.608 CC test/env/vtophys/vtophys.o 00:04:03.867 CC test/nvme/e2edp/nvme_dp.o 00:04:03.867 LINK reset 00:04:03.867 CC test/nvme/overhead/overhead.o 00:04:03.867 LINK vtophys 00:04:03.867 LINK sgl 00:04:03.867 CXX test/cpp_headers/blobfs_bdev.o 00:04:03.867 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:03.867 CC test/env/memory/memory_ut.o 00:04:03.867 CC test/env/pci/pci_ut.o 00:04:04.126 LINK nvme_dp 00:04:04.126 CC test/nvme/err_injection/err_injection.o 00:04:04.126 LINK env_dpdk_post_init 00:04:04.126 CC test/nvme/startup/startup.o 00:04:04.126 CXX test/cpp_headers/blobfs.o 00:04:04.126 LINK overhead 00:04:04.126 CC test/nvme/reserve/reserve.o 00:04:04.386 LINK err_injection 00:04:04.386 CXX test/cpp_headers/blob.o 00:04:04.386 LINK startup 00:04:04.386 CC test/nvme/simple_copy/simple_copy.o 00:04:04.386 CC test/nvme/connect_stress/connect_stress.o 00:04:04.386 LINK pci_ut 00:04:04.386 CXX test/cpp_headers/conf.o 00:04:04.386 LINK reserve 00:04:04.386 CXX test/cpp_headers/config.o 00:04:04.386 CXX test/cpp_headers/cpuset.o 00:04:04.386 CC test/nvme/boot_partition/boot_partition.o 00:04:04.645 LINK connect_stress 00:04:04.645 LINK simple_copy 00:04:04.645 CXX test/cpp_headers/crc16.o 00:04:04.645 CXX test/cpp_headers/crc32.o 00:04:04.645 CC test/nvme/compliance/nvme_compliance.o 00:04:04.645 LINK boot_partition 00:04:04.645 CXX test/cpp_headers/crc64.o 00:04:04.645 CC test/nvme/fused_ordering/fused_ordering.o 00:04:04.645 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:04.645 CXX test/cpp_headers/dif.o 00:04:04.645 CC test/nvme/fdp/fdp.o 00:04:04.903 CXX test/cpp_headers/dma.o 00:04:04.904 CXX test/cpp_headers/endian.o 00:04:04.904 LINK fused_ordering 00:04:04.904 CXX test/cpp_headers/env_dpdk.o 00:04:04.904 LINK doorbell_aers 00:04:04.904 LINK memory_ut 00:04:04.904 LINK nvme_compliance 00:04:04.904 CXX test/cpp_headers/env.o 00:04:04.904 CXX test/cpp_headers/event.o 00:04:05.162 CXX test/cpp_headers/fd_group.o 00:04:05.162 CXX test/cpp_headers/fd.o 00:04:05.162 LINK fdp 00:04:05.162 CXX test/cpp_headers/file.o 00:04:05.162 CXX test/cpp_headers/ftl.o 00:04:05.162 CC test/nvme/cuse/cuse.o 00:04:05.162 CXX test/cpp_headers/gpt_spec.o 00:04:05.162 CXX test/cpp_headers/hexlify.o 00:04:05.162 CXX test/cpp_headers/histogram_data.o 00:04:05.162 CXX test/cpp_headers/idxd.o 00:04:05.162 CXX test/cpp_headers/idxd_spec.o 00:04:05.421 CXX test/cpp_headers/init.o 00:04:05.421 CXX test/cpp_headers/ioat.o 00:04:05.421 CXX test/cpp_headers/ioat_spec.o 00:04:05.421 CXX test/cpp_headers/iscsi_spec.o 00:04:05.421 CXX test/cpp_headers/json.o 00:04:05.421 CXX test/cpp_headers/jsonrpc.o 
00:04:05.421 CXX test/cpp_headers/likely.o 00:04:05.421 CXX test/cpp_headers/log.o 00:04:05.421 CXX test/cpp_headers/lvol.o 00:04:05.680 CXX test/cpp_headers/memory.o 00:04:05.680 CXX test/cpp_headers/mmio.o 00:04:05.680 CXX test/cpp_headers/nbd.o 00:04:05.680 CXX test/cpp_headers/notify.o 00:04:05.680 CXX test/cpp_headers/nvme.o 00:04:05.680 CXX test/cpp_headers/nvme_intel.o 00:04:05.680 CXX test/cpp_headers/nvme_ocssd.o 00:04:05.680 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:05.680 CXX test/cpp_headers/nvme_spec.o 00:04:05.680 CXX test/cpp_headers/nvme_zns.o 00:04:05.939 CXX test/cpp_headers/nvmf_cmd.o 00:04:05.939 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:05.939 CXX test/cpp_headers/nvmf.o 00:04:05.939 CXX test/cpp_headers/nvmf_spec.o 00:04:05.939 CXX test/cpp_headers/nvmf_transport.o 00:04:05.939 CXX test/cpp_headers/opal.o 00:04:05.939 CXX test/cpp_headers/opal_spec.o 00:04:05.939 CXX test/cpp_headers/pci_ids.o 00:04:05.939 CXX test/cpp_headers/pipe.o 00:04:05.939 CXX test/cpp_headers/queue.o 00:04:05.939 CXX test/cpp_headers/reduce.o 00:04:06.198 CXX test/cpp_headers/scheduler.o 00:04:06.198 CXX test/cpp_headers/rpc.o 00:04:06.198 CXX test/cpp_headers/scsi.o 00:04:06.198 CXX test/cpp_headers/scsi_spec.o 00:04:06.198 CXX test/cpp_headers/sock.o 00:04:06.198 CXX test/cpp_headers/stdinc.o 00:04:06.457 LINK cuse 00:04:06.457 CXX test/cpp_headers/string.o 00:04:06.457 CXX test/cpp_headers/thread.o 00:04:06.457 CXX test/cpp_headers/trace.o 00:04:06.457 CXX test/cpp_headers/trace_parser.o 00:04:06.457 CXX test/cpp_headers/tree.o 00:04:06.457 CXX test/cpp_headers/ublk.o 00:04:06.457 CXX test/cpp_headers/util.o 00:04:06.457 CXX test/cpp_headers/uuid.o 00:04:06.716 CXX test/cpp_headers/version.o 00:04:06.716 CXX test/cpp_headers/vfio_user_pci.o 00:04:06.716 CXX test/cpp_headers/vfio_user_spec.o 00:04:06.716 CXX test/cpp_headers/vhost.o 00:04:06.716 CXX test/cpp_headers/vmd.o 00:04:06.716 CXX test/cpp_headers/xor.o 00:04:06.716 CXX test/cpp_headers/zipf.o 00:04:08.133 LINK esnap 00:04:09.067 00:04:09.067 real 0m48.810s 00:04:09.067 user 4m36.916s 00:04:09.067 sys 1m3.690s 00:04:09.067 16:23:46 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:09.067 16:23:46 -- common/autotest_common.sh@10 -- $ set +x 00:04:09.067 ************************************ 00:04:09.067 END TEST make 00:04:09.067 ************************************ 00:04:09.325 16:23:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:09.325 16:23:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:09.325 16:23:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:09.325 16:23:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:09.325 16:23:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:09.325 16:23:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:09.325 16:23:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:09.325 16:23:46 -- scripts/common.sh@335 -- # IFS=.-: 00:04:09.325 16:23:46 -- scripts/common.sh@335 -- # read -ra ver1 00:04:09.325 16:23:46 -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.325 16:23:46 -- scripts/common.sh@336 -- # read -ra ver2 00:04:09.325 16:23:46 -- scripts/common.sh@337 -- # local 'op=<' 00:04:09.325 16:23:46 -- scripts/common.sh@339 -- # ver1_l=2 00:04:09.326 16:23:46 -- scripts/common.sh@340 -- # ver2_l=1 00:04:09.326 16:23:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:09.326 16:23:46 -- scripts/common.sh@343 -- # case "$op" in 00:04:09.326 16:23:46 -- scripts/common.sh@344 -- # : 1 00:04:09.326 16:23:46 -- 
scripts/common.sh@363 -- # (( v = 0 )) 00:04:09.326 16:23:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.326 16:23:46 -- scripts/common.sh@364 -- # decimal 1 00:04:09.326 16:23:46 -- scripts/common.sh@352 -- # local d=1 00:04:09.326 16:23:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.326 16:23:46 -- scripts/common.sh@354 -- # echo 1 00:04:09.326 16:23:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:09.326 16:23:46 -- scripts/common.sh@365 -- # decimal 2 00:04:09.326 16:23:46 -- scripts/common.sh@352 -- # local d=2 00:04:09.326 16:23:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.326 16:23:46 -- scripts/common.sh@354 -- # echo 2 00:04:09.326 16:23:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:09.326 16:23:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:09.326 16:23:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:09.326 16:23:46 -- scripts/common.sh@367 -- # return 0 00:04:09.326 16:23:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.326 16:23:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:09.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.326 --rc genhtml_branch_coverage=1 00:04:09.326 --rc genhtml_function_coverage=1 00:04:09.326 --rc genhtml_legend=1 00:04:09.326 --rc geninfo_all_blocks=1 00:04:09.326 --rc geninfo_unexecuted_blocks=1 00:04:09.326 00:04:09.326 ' 00:04:09.326 16:23:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:09.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.326 --rc genhtml_branch_coverage=1 00:04:09.326 --rc genhtml_function_coverage=1 00:04:09.326 --rc genhtml_legend=1 00:04:09.326 --rc geninfo_all_blocks=1 00:04:09.326 --rc geninfo_unexecuted_blocks=1 00:04:09.326 00:04:09.326 ' 00:04:09.326 16:23:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:09.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.326 --rc genhtml_branch_coverage=1 00:04:09.326 --rc genhtml_function_coverage=1 00:04:09.326 --rc genhtml_legend=1 00:04:09.326 --rc geninfo_all_blocks=1 00:04:09.326 --rc geninfo_unexecuted_blocks=1 00:04:09.326 00:04:09.326 ' 00:04:09.326 16:23:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:09.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.326 --rc genhtml_branch_coverage=1 00:04:09.326 --rc genhtml_function_coverage=1 00:04:09.326 --rc genhtml_legend=1 00:04:09.326 --rc geninfo_all_blocks=1 00:04:09.326 --rc geninfo_unexecuted_blocks=1 00:04:09.326 00:04:09.326 ' 00:04:09.326 16:23:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:09.326 16:23:46 -- nvmf/common.sh@7 -- # uname -s 00:04:09.326 16:23:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.326 16:23:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.326 16:23:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.326 16:23:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.326 16:23:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.326 16:23:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.326 16:23:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.326 16:23:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.326 16:23:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.326 16:23:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.326 16:23:46 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:04:09.326 16:23:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:04:09.326 16:23:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.326 16:23:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.326 16:23:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:09.326 16:23:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:09.326 16:23:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.326 16:23:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.326 16:23:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.326 16:23:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.326 16:23:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.326 16:23:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.326 16:23:46 -- paths/export.sh@5 -- # export PATH 00:04:09.326 16:23:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.326 16:23:46 -- nvmf/common.sh@46 -- # : 0 00:04:09.326 16:23:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:09.326 16:23:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:09.326 16:23:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:09.326 16:23:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.326 16:23:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.326 16:23:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:09.326 16:23:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:09.326 16:23:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:09.326 16:23:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:09.326 16:23:46 -- spdk/autotest.sh@32 -- # uname -s 00:04:09.326 16:23:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:09.326 16:23:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:09.326 16:23:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:09.326 16:23:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:09.326 16:23:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:09.326 16:23:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:09.584 16:23:46 -- spdk/autotest.sh@46 -- # type -P udevadm 
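[Editor's note] The xtrace above steps scripts/common.sh through a component-wise dotted-version comparison (lt 1.15 2: split both strings on ".-:", then compare field by field, treating a missing field as 0). A minimal standalone sketch of the same idea in bash, an assumed shape rather than the exact SPDK helper:

    # Assumed sketch of the component-wise version compare traced above.
    lt() {  # lt A B -> exit 0 iff version A sorts strictly before version B
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"  # decided at field 0: 1 < 2

In the trace, this result is what selects the lcov 1.x flag set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) that then gets exported as LCOV_OPTS and LCOV.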
00:04:09.584 16:23:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:09.584 16:23:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:09.584 16:23:46 -- spdk/autotest.sh@48 -- # udevadm_pid=61832 00:04:09.584 16:23:46 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:09.584 16:23:46 -- spdk/autotest.sh@54 -- # echo 61834 00:04:09.584 16:23:46 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:09.584 16:23:46 -- spdk/autotest.sh@56 -- # echo 61835 00:04:09.584 16:23:46 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:09.584 16:23:46 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:09.584 16:23:46 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:09.584 16:23:46 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:09.584 16:23:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:09.584 16:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:09.584 16:23:46 -- spdk/autotest.sh@70 -- # create_test_list 00:04:09.584 16:23:46 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:09.584 16:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:09.584 16:23:46 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:09.584 16:23:46 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:09.584 16:23:46 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:09.584 16:23:46 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:09.584 16:23:46 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:09.584 16:23:46 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:09.584 16:23:46 -- common/autotest_common.sh@1450 -- # uname 00:04:09.584 16:23:46 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:09.585 16:23:46 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:09.585 16:23:46 -- common/autotest_common.sh@1470 -- # uname 00:04:09.585 16:23:46 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:09.585 16:23:46 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:09.585 16:23:46 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:09.585 lcov: LCOV version 1.15 00:04:09.585 16:23:46 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:17.699 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:17.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:17.699 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:17.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:17.699 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:17.699 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:35.786 16:24:11 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:35.786 16:24:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.786 16:24:11 -- common/autotest_common.sh@10 -- # set +x 00:04:35.786 16:24:11 -- spdk/autotest.sh@89 -- # rm -f 00:04:35.786 16:24:11 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.786 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:35.786 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:35.786 16:24:11 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:35.786 16:24:11 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:35.786 16:24:11 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:35.786 16:24:11 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:35.786 16:24:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:35.786 16:24:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:35.786 16:24:11 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:35.786 16:24:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.786 16:24:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:35.786 16:24:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:35.786 16:24:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:35.786 16:24:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:35.786 16:24:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:35.786 16:24:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:35.786 16:24:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:35.786 16:24:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:35.786 16:24:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:35.786 16:24:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:35.786 16:24:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:35.786 16:24:11 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:35.786 16:24:11 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:35.786 16:24:11 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:35.786 16:24:11 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:35.786 16:24:11 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:35.786 16:24:11 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:35.786 16:24:11 -- spdk/autotest.sh@108 -- # grep -v p 00:04:35.786 16:24:11 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:35.786 16:24:11 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:35.786 16:24:11 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:35.786 16:24:11 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:35.786 16:24:11 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:35.786 16:24:11 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py 
/dev/nvme0n1 00:04:35.786 No valid GPT data, bailing 00:04:35.786 16:24:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.786 16:24:12 -- scripts/common.sh@393 -- # pt= 00:04:35.786 16:24:12 -- scripts/common.sh@394 -- # return 1 00:04:35.786 16:24:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:35.786 1+0 records in 00:04:35.786 1+0 records out 00:04:35.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00551715 s, 190 MB/s 00:04:35.786 16:24:12 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:35.786 16:24:12 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:35.786 16:24:12 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:35.786 16:24:12 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:35.786 16:24:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:35.786 No valid GPT data, bailing 00:04:35.786 16:24:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:35.786 16:24:12 -- scripts/common.sh@393 -- # pt= 00:04:35.786 16:24:12 -- scripts/common.sh@394 -- # return 1 00:04:35.786 16:24:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:35.786 1+0 records in 00:04:35.786 1+0 records out 00:04:35.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475816 s, 220 MB/s 00:04:35.786 16:24:12 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:35.786 16:24:12 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:35.786 16:24:12 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:35.786 16:24:12 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:35.786 16:24:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:35.786 No valid GPT data, bailing 00:04:35.786 16:24:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:35.786 16:24:12 -- scripts/common.sh@393 -- # pt= 00:04:35.786 16:24:12 -- scripts/common.sh@394 -- # return 1 00:04:35.786 16:24:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:35.786 1+0 records in 00:04:35.786 1+0 records out 00:04:35.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444917 s, 236 MB/s 00:04:35.786 16:24:12 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:35.786 16:24:12 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:35.786 16:24:12 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:35.786 16:24:12 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:35.786 16:24:12 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:35.787 No valid GPT data, bailing 00:04:35.787 16:24:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:35.787 16:24:12 -- scripts/common.sh@393 -- # pt= 00:04:35.787 16:24:12 -- scripts/common.sh@394 -- # return 1 00:04:35.787 16:24:12 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:35.787 1+0 records in 00:04:35.787 1+0 records out 00:04:35.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466394 s, 225 MB/s 00:04:35.787 16:24:12 -- spdk/autotest.sh@116 -- # sync 00:04:35.787 16:24:12 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:35.787 16:24:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:35.787 16:24:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 
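[Editor's note] The pre-cleanup pass above is a guard-then-wipe loop: enumerate whole NVMe namespaces (the grep -v p drops partition nodes), skip zoned namespaces (whose /sys/block/*/queue/zoned attribute reads something other than "none"), skip anything spdk-gpt.py or blkid still sees a partition table on, and zero the first MiB of the rest. A condensed sketch of that flow, using only blkid for the in-use check (an assumption for brevity; the traced script consults spdk-gpt.py first):

    # Assumed sketch of the wipe loop traced above (blkid-only in-use check).
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do  # whole namespaces only
        name=${dev##*/}
        # Zoned namespaces report e.g. "host-managed" here; leave them alone.
        if [[ -e /sys/block/$name/queue/zoned &&
              $(< "/sys/block/$name/queue/zoned") != none ]]; then
            continue
        fi
        # A non-empty PTTYPE (e.g. "gpt") means a live partition table; skip.
        if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
            continue
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1  # clobber stale metadata
    done

The "No valid GPT data, bailing" lines above are the happy path: no partition table means the namespace is free to wipe, hence the four dd runs that follow.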
00:04:37.162 16:24:14 -- spdk/autotest.sh@122 -- # uname -s 00:04:37.162 16:24:14 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:37.162 16:24:14 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.162 16:24:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.162 16:24:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.162 16:24:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.162 ************************************ 00:04:37.162 START TEST setup.sh 00:04:37.162 ************************************ 00:04:37.162 16:24:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.420 * Looking for test storage... 00:04:37.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.420 16:24:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:37.420 16:24:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:37.420 16:24:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:37.420 16:24:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:37.420 16:24:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:37.420 16:24:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:37.420 16:24:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:37.420 16:24:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:37.420 16:24:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:37.420 16:24:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.420 16:24:14 -- scripts/common.sh@336 -- # read -ra ver2 00:04:37.420 16:24:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:37.420 16:24:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:37.420 16:24:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:37.420 16:24:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:37.420 16:24:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:37.420 16:24:14 -- scripts/common.sh@344 -- # : 1 00:04:37.420 16:24:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:37.420 16:24:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.420 16:24:14 -- scripts/common.sh@364 -- # decimal 1 00:04:37.420 16:24:14 -- scripts/common.sh@352 -- # local d=1 00:04:37.420 16:24:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.420 16:24:14 -- scripts/common.sh@354 -- # echo 1 00:04:37.420 16:24:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:37.420 16:24:14 -- scripts/common.sh@365 -- # decimal 2 00:04:37.420 16:24:14 -- scripts/common.sh@352 -- # local d=2 00:04:37.420 16:24:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.420 16:24:14 -- scripts/common.sh@354 -- # echo 2 00:04:37.420 16:24:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:37.420 16:24:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:37.420 16:24:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:37.420 16:24:14 -- scripts/common.sh@367 -- # return 0 00:04:37.421 16:24:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.421 16:24:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:37.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.421 --rc genhtml_branch_coverage=1 00:04:37.421 --rc genhtml_function_coverage=1 00:04:37.421 --rc genhtml_legend=1 00:04:37.421 --rc geninfo_all_blocks=1 00:04:37.421 --rc geninfo_unexecuted_blocks=1 00:04:37.421 00:04:37.421 ' 00:04:37.421 16:24:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:37.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.421 --rc genhtml_branch_coverage=1 00:04:37.421 --rc genhtml_function_coverage=1 00:04:37.421 --rc genhtml_legend=1 00:04:37.421 --rc geninfo_all_blocks=1 00:04:37.421 --rc geninfo_unexecuted_blocks=1 00:04:37.421 00:04:37.421 ' 00:04:37.421 16:24:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:37.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.421 --rc genhtml_branch_coverage=1 00:04:37.421 --rc genhtml_function_coverage=1 00:04:37.421 --rc genhtml_legend=1 00:04:37.421 --rc geninfo_all_blocks=1 00:04:37.421 --rc geninfo_unexecuted_blocks=1 00:04:37.421 00:04:37.421 ' 00:04:37.421 16:24:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:37.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.421 --rc genhtml_branch_coverage=1 00:04:37.421 --rc genhtml_function_coverage=1 00:04:37.421 --rc genhtml_legend=1 00:04:37.421 --rc geninfo_all_blocks=1 00:04:37.421 --rc geninfo_unexecuted_blocks=1 00:04:37.421 00:04:37.421 ' 00:04:37.421 16:24:14 -- setup/test-setup.sh@10 -- # uname -s 00:04:37.421 16:24:14 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:37.421 16:24:14 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.421 16:24:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.421 16:24:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.421 16:24:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.421 ************************************ 00:04:37.421 START TEST acl 00:04:37.421 ************************************ 00:04:37.421 16:24:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.421 * Looking for test storage... 
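[Editor's note] run_test, seen above wrapping both setup.sh and the nested acl suite, brackets a command with the asterisk START TEST / END TEST banners and leaves a real/user/sys timing block in the log. A toy bash reimplementation with roughly the same observable output (assumed shape only; the real helper also manages xtrace state, which is why '[' 2 -le 1 ']' and xtrace_disable appear around each invocation):

    # Assumed sketch: banner-and-timing wrapper in the style of run_test.
    run_test() {
        local name=$1 banner rc; shift
        banner=$(printf '*%.0s' {1..40})  # row of 40 asterisks
        printf '%s\nSTART TEST %s\n%s\n' "$banner" "$name" "$banner"
        time "$@"          # emits the real/user/sys block seen in the log
        rc=$?
        printf '%s\nEND TEST %s\n%s\n' "$banner" "$name" "$banner"
        return "$rc"
    }
    run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh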
00:04:37.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.421 16:24:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:37.421 16:24:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:37.421 16:24:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:37.679 16:24:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:37.679 16:24:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:37.679 16:24:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:37.679 16:24:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:37.679 16:24:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:37.679 16:24:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:37.679 16:24:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.679 16:24:14 -- scripts/common.sh@336 -- # read -ra ver2 00:04:37.679 16:24:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:37.679 16:24:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:37.680 16:24:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:37.680 16:24:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:37.680 16:24:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:37.680 16:24:14 -- scripts/common.sh@344 -- # : 1 00:04:37.680 16:24:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:37.680 16:24:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.680 16:24:14 -- scripts/common.sh@364 -- # decimal 1 00:04:37.680 16:24:14 -- scripts/common.sh@352 -- # local d=1 00:04:37.680 16:24:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.680 16:24:14 -- scripts/common.sh@354 -- # echo 1 00:04:37.680 16:24:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:37.680 16:24:14 -- scripts/common.sh@365 -- # decimal 2 00:04:37.680 16:24:14 -- scripts/common.sh@352 -- # local d=2 00:04:37.680 16:24:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.680 16:24:14 -- scripts/common.sh@354 -- # echo 2 00:04:37.680 16:24:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:37.680 16:24:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:37.680 16:24:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:37.680 16:24:14 -- scripts/common.sh@367 -- # return 0 00:04:37.680 16:24:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.680 16:24:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:37.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.680 --rc genhtml_branch_coverage=1 00:04:37.680 --rc genhtml_function_coverage=1 00:04:37.680 --rc genhtml_legend=1 00:04:37.680 --rc geninfo_all_blocks=1 00:04:37.680 --rc geninfo_unexecuted_blocks=1 00:04:37.680 00:04:37.680 ' 00:04:37.680 16:24:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:37.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.680 --rc genhtml_branch_coverage=1 00:04:37.680 --rc genhtml_function_coverage=1 00:04:37.680 --rc genhtml_legend=1 00:04:37.680 --rc geninfo_all_blocks=1 00:04:37.680 --rc geninfo_unexecuted_blocks=1 00:04:37.680 00:04:37.680 ' 00:04:37.680 16:24:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:37.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.680 --rc genhtml_branch_coverage=1 00:04:37.680 --rc genhtml_function_coverage=1 00:04:37.680 --rc genhtml_legend=1 00:04:37.680 --rc geninfo_all_blocks=1 00:04:37.680 --rc geninfo_unexecuted_blocks=1 00:04:37.680 00:04:37.680 ' 00:04:37.680 16:24:14 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:37.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.680 --rc genhtml_branch_coverage=1 00:04:37.680 --rc genhtml_function_coverage=1 00:04:37.680 --rc genhtml_legend=1 00:04:37.680 --rc geninfo_all_blocks=1 00:04:37.680 --rc geninfo_unexecuted_blocks=1 00:04:37.680 00:04:37.680 ' 00:04:37.680 16:24:14 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:37.680 16:24:14 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:37.680 16:24:14 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:37.680 16:24:14 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:37.680 16:24:14 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:37.680 16:24:14 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:37.680 16:24:14 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:37.680 16:24:14 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.680 16:24:14 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:37.680 16:24:14 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:37.680 16:24:14 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:37.680 16:24:14 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:37.680 16:24:14 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:37.680 16:24:14 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:37.680 16:24:14 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:37.680 16:24:14 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:37.680 16:24:14 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:37.680 16:24:14 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:37.680 16:24:14 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:37.680 16:24:14 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:37.680 16:24:14 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:37.680 16:24:14 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:37.680 16:24:14 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:37.680 16:24:14 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:37.680 16:24:14 -- setup/acl.sh@12 -- # devs=() 00:04:37.680 16:24:14 -- setup/acl.sh@12 -- # declare -a devs 00:04:37.680 16:24:14 -- setup/acl.sh@13 -- # drivers=() 00:04:37.680 16:24:14 -- setup/acl.sh@13 -- # declare -A drivers 00:04:37.680 16:24:14 -- setup/acl.sh@51 -- # setup reset 00:04:37.680 16:24:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.680 16:24:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.616 16:24:15 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:38.616 16:24:15 -- setup/acl.sh@16 -- # local dev driver 00:04:38.616 16:24:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.616 16:24:15 -- setup/acl.sh@15 -- # setup output status 00:04:38.616 16:24:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.616 16:24:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:38.616 Hugepages 00:04:38.616 node hugesize free / total 00:04:38.616 16:24:15 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:38.617 16:24:15 -- setup/acl.sh@19 -- # continue 00:04:38.617 16:24:15 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:38.617 00:04:38.617 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.617 16:24:15 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:38.617 16:24:15 -- setup/acl.sh@19 -- # continue 00:04:38.617 16:24:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.617 16:24:16 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:38.617 16:24:16 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:38.617 16:24:16 -- setup/acl.sh@20 -- # continue 00:04:38.617 16:24:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.617 16:24:16 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:38.617 16:24:16 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:38.617 16:24:16 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:38.617 16:24:16 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:38.617 16:24:16 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:38.617 16:24:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.876 16:24:16 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:38.876 16:24:16 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:38.876 16:24:16 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:38.876 16:24:16 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:38.876 16:24:16 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:38.876 16:24:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:38.876 16:24:16 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:38.876 16:24:16 -- setup/acl.sh@54 -- # run_test denied denied 00:04:38.876 16:24:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.876 16:24:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.876 16:24:16 -- common/autotest_common.sh@10 -- # set +x 00:04:38.876 ************************************ 00:04:38.876 START TEST denied 00:04:38.876 ************************************ 00:04:38.876 16:24:16 -- common/autotest_common.sh@1114 -- # denied 00:04:38.876 16:24:16 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:38.876 16:24:16 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:38.876 16:24:16 -- setup/acl.sh@38 -- # setup output config 00:04:38.876 16:24:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.876 16:24:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:39.814 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:39.814 16:24:17 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:39.814 16:24:17 -- setup/acl.sh@28 -- # local dev driver 00:04:39.814 16:24:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:39.814 16:24:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:39.814 16:24:17 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:39.814 16:24:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:39.814 16:24:17 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:39.814 16:24:17 -- setup/acl.sh@41 -- # setup reset 00:04:39.814 16:24:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.814 16:24:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.382 ************************************ 00:04:40.382 END TEST denied 00:04:40.382 ************************************ 00:04:40.382 00:04:40.382 real 0m1.566s 00:04:40.382 user 0m0.634s 00:04:40.382 sys 0m0.907s 00:04:40.382 16:24:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:40.382 16:24:17 -- 
common/autotest_common.sh@10 -- # set +x 00:04:40.382 16:24:17 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:40.382 16:24:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.382 16:24:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.382 16:24:17 -- common/autotest_common.sh@10 -- # set +x 00:04:40.382 ************************************ 00:04:40.382 START TEST allowed 00:04:40.382 ************************************ 00:04:40.382 16:24:17 -- common/autotest_common.sh@1114 -- # allowed 00:04:40.382 16:24:17 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:40.382 16:24:17 -- setup/acl.sh@45 -- # setup output config 00:04:40.382 16:24:17 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:40.383 16:24:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.383 16:24:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.321 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.321 16:24:18 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:41.321 16:24:18 -- setup/acl.sh@28 -- # local dev driver 00:04:41.321 16:24:18 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:41.321 16:24:18 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:41.321 16:24:18 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:41.321 16:24:18 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:41.321 16:24:18 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:41.321 16:24:18 -- setup/acl.sh@48 -- # setup reset 00:04:41.321 16:24:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.321 16:24:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.257 ************************************ 00:04:42.257 END TEST allowed 00:04:42.257 ************************************ 00:04:42.257 00:04:42.257 real 0m1.637s 00:04:42.257 user 0m0.739s 00:04:42.257 sys 0m0.908s 00:04:42.257 16:24:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.257 16:24:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.257 ************************************ 00:04:42.257 END TEST acl 00:04:42.257 ************************************ 00:04:42.257 00:04:42.257 real 0m4.695s 00:04:42.257 user 0m2.049s 00:04:42.257 sys 0m2.663s 00:04:42.257 16:24:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.257 16:24:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.257 16:24:19 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:42.257 16:24:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.257 16:24:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.257 16:24:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.257 ************************************ 00:04:42.257 START TEST hugepages 00:04:42.257 ************************************ 00:04:42.257 16:24:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:42.257 * Looking for test storage... 
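[Editor's note] The acl suite above exercises setup.sh's device filtering in both directions: the denied test exports PCI_BLOCKED=' 0000:00:06.0' and expects the "Skipping denied controller" line with 0000:00:06.0 left on the nvme driver, while the allowed test sets PCI_ALLOWED=0000:00:06.0 and expects the "nvme -> uio_pci_generic" rebind for that device only. Both verify the outcome by resolving the device's driver symlink under sysfs, which reduces to one readlink; a sketch with an assumed helper name:

    # Assumed helper: report which kernel driver a PCI BDF is bound to,
    # the same sysfs symlink the acl verify step resolves above.
    pci_driver() {
        local bdf=$1
        [[ -e /sys/bus/pci/devices/$bdf/driver ]] || return 1  # unbound
        basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"
    }
    pci_driver 0000:00:06.0  # prints e.g. "nvme" or "uio_pci_generic"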
00:04:42.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:42.257 16:24:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:42.257 16:24:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:42.257 16:24:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:42.257 16:24:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.257 16:24:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.257 16:24:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.258 16:24:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.258 16:24:19 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.258 16:24:19 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.258 16:24:19 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.258 16:24:19 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.258 16:24:19 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.258 16:24:19 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.258 16:24:19 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.258 16:24:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.258 16:24:19 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.258 16:24:19 -- scripts/common.sh@344 -- # : 1 00:04:42.258 16:24:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.258 16:24:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.258 16:24:19 -- scripts/common.sh@364 -- # decimal 1 00:04:42.258 16:24:19 -- scripts/common.sh@352 -- # local d=1 00:04:42.258 16:24:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.258 16:24:19 -- scripts/common.sh@354 -- # echo 1 00:04:42.258 16:24:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.258 16:24:19 -- scripts/common.sh@365 -- # decimal 2 00:04:42.258 16:24:19 -- scripts/common.sh@352 -- # local d=2 00:04:42.258 16:24:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.258 16:24:19 -- scripts/common.sh@354 -- # echo 2 00:04:42.258 16:24:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.258 16:24:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.258 16:24:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.258 16:24:19 -- scripts/common.sh@367 -- # return 0 00:04:42.258 16:24:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.258 16:24:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.258 --rc genhtml_branch_coverage=1 00:04:42.258 --rc genhtml_function_coverage=1 00:04:42.258 --rc genhtml_legend=1 00:04:42.258 --rc geninfo_all_blocks=1 00:04:42.258 --rc geninfo_unexecuted_blocks=1 00:04:42.258 00:04:42.258 ' 00:04:42.258 16:24:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.258 --rc genhtml_branch_coverage=1 00:04:42.258 --rc genhtml_function_coverage=1 00:04:42.258 --rc genhtml_legend=1 00:04:42.258 --rc geninfo_all_blocks=1 00:04:42.258 --rc geninfo_unexecuted_blocks=1 00:04:42.258 00:04:42.258 ' 00:04:42.258 16:24:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.258 --rc genhtml_branch_coverage=1 00:04:42.258 --rc genhtml_function_coverage=1 00:04:42.258 --rc genhtml_legend=1 00:04:42.258 --rc geninfo_all_blocks=1 00:04:42.258 --rc geninfo_unexecuted_blocks=1 00:04:42.258 00:04:42.258 ' 00:04:42.258 16:24:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.258 --rc genhtml_branch_coverage=1 00:04:42.258 --rc genhtml_function_coverage=1 00:04:42.258 --rc genhtml_legend=1 00:04:42.258 --rc geninfo_all_blocks=1 00:04:42.258 --rc geninfo_unexecuted_blocks=1 00:04:42.258 00:04:42.258 ' 00:04:42.258 16:24:19 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:42.258 16:24:19 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:42.258 16:24:19 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:42.258 16:24:19 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:42.258 16:24:19 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:42.258 16:24:19 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:42.258 16:24:19 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:42.258 16:24:19 -- setup/common.sh@18 -- # local node= 00:04:42.258 16:24:19 -- setup/common.sh@19 -- # local var val 00:04:42.258 16:24:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.258 16:24:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.258 16:24:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.258 16:24:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.258 16:24:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.258 16:24:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.258 16:24:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 4408776 kB' 'MemAvailable: 7338328 kB' 'Buffers: 2684 kB' 'Cached: 3130072 kB' 'SwapCached: 0 kB' 'Active: 495860 kB' 'Inactive: 2753092 kB' 'Active(anon): 126708 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 117908 kB' 'Mapped: 51004 kB' 'Shmem: 10512 kB' 'KReclaimable: 88508 kB' 'Slab: 191176 kB' 'SReclaimable: 88508 kB' 'SUnreclaim: 102668 kB' 'KernelStack: 6736 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 307380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.258 16:24:19 -- 
setup/common.sh@32 -- # continue 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.258 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.258 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.518 16:24:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.518 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.518 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.518 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.518 16:24:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.518 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.518 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.518 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.518 16:24:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.518 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.518 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.518 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.519 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.519 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # continue 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.520 16:24:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.520 16:24:19 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:42.520 16:24:19 -- setup/common.sh@33 -- # echo 2048 00:04:42.520 16:24:19 -- setup/common.sh@33 -- # return 0 00:04:42.520 16:24:19 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:42.520 16:24:19 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:42.520 16:24:19 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:42.520 16:24:19 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:42.520 16:24:19 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:42.520 16:24:19 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:42.520 16:24:19 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:42.520 16:24:19 -- setup/hugepages.sh@207 -- # get_nodes 00:04:42.520 16:24:19 -- setup/hugepages.sh@27 -- # local node 00:04:42.520 16:24:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.520 16:24:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:42.520 16:24:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.520 16:24:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.520 16:24:19 -- setup/hugepages.sh@208 -- # clear_hp 00:04:42.520 16:24:19 -- setup/hugepages.sh@37 -- # local node hp 00:04:42.520 16:24:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.520 16:24:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.520 16:24:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:42.520 16:24:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.520 16:24:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:42.520 16:24:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:42.520 16:24:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:42.520 16:24:19 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:42.520 16:24:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.520 16:24:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.520 16:24:19 -- common/autotest_common.sh@10 -- # set +x 00:04:42.520 ************************************ 00:04:42.520 START TEST default_setup 00:04:42.520 ************************************ 00:04:42.520 16:24:19 -- common/autotest_common.sh@1114 -- # default_setup 00:04:42.520 16:24:19 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:42.520 16:24:19 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.520 16:24:19 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:42.520 16:24:19 -- setup/hugepages.sh@51 -- # shift 00:04:42.520 16:24:19 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:42.520 16:24:19 -- setup/hugepages.sh@52 -- # local node_ids 00:04:42.520 16:24:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.520 16:24:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.520 16:24:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:42.520 16:24:19 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:42.520 16:24:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.520 16:24:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.520 16:24:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:42.520 16:24:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.520 16:24:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.520 16:24:19 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:42.520 16:24:19 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.520 16:24:19 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:42.520 16:24:19 -- setup/hugepages.sh@73 -- # return 0 00:04:42.520 16:24:19 -- setup/hugepages.sh@137 -- # setup output 00:04:42.520 16:24:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.520 16:24:19 -- setup/common.sh@10 
00:04:42.520 16:24:19 -- setup/hugepages.sh@137 -- # setup output
00:04:42.520 16:24:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.520 16:24:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:43.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:43.350 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:43.350 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:43.350 16:24:20 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:43.350 16:24:20 -- setup/hugepages.sh@89 -- # local node
00:04:43.350 16:24:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.350 16:24:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:43.350 16:24:20 -- setup/hugepages.sh@92 -- # local surp
00:04:43.350 16:24:20 -- setup/hugepages.sh@93 -- # local resv
00:04:43.350 16:24:20 -- setup/hugepages.sh@94 -- # local anon
00:04:43.350 16:24:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:43.350 16:24:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:43.350 16:24:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.350 16:24:20 -- setup/common.sh@18 -- # local node=
00:04:43.350 16:24:20 -- setup/common.sh@19 -- # local var val
00:04:43.350 16:24:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.350 16:24:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.350 16:24:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.350 16:24:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.350 16:24:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.350 16:24:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.350 16:24:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.350 16:24:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508308 kB' 'MemAvailable: 9437732 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497512 kB' 'Inactive: 2753100 kB' 'Active(anon): 128360 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119452 kB' 'Mapped: 51016 kB' 'Shmem: 10492 kB' 'KReclaimable: 88236 kB' 'Slab: 190892 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102656 kB' 'KernelStack: 6704 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:43.350 16:24:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.350 [... xtrace condensed: every /proc/meminfo key from MemTotal through HardwareCorrupted fails the AnonHugePages match and is skipped via "IFS=': '" / "read -r var val _" / "continue" ...]
00:04:43.351 16:24:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.351 16:24:20 -- setup/common.sh@33 -- # echo 0
00:04:43.351 16:24:20 -- setup/common.sh@33 -- # return 0
00:04:43.351 16:24:20 -- setup/hugepages.sh@97 -- # anon=0
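Editor's note: the repeated "IFS=': '" / "read -r var val _" / "continue" entries above are setup/common.sh's get_meminfo walking /proc/meminfo one "Key: value" pair at a time and echoing the value column of the requested key. A self-contained sketch of that pattern, simplified from what the trace shows (the real helper first snapshots the file into an array with mapfile):

    #!/usr/bin/env bash
    # Minimal sketch of the per-key scan seen in the trace: split each
    # /proc/meminfo line on ': ', skip non-matching keys with "continue",
    # and print the value of the requested key.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo AnonHugePages   # printed "0" on the machine traced here
    get_meminfo Hugepagesize    # printed "2048"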
00:04:43.351 16:24:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:43.351 16:24:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.351 16:24:20 -- setup/common.sh@18 -- # local node=
00:04:43.351 16:24:20 -- setup/common.sh@19 -- # local var val
00:04:43.351 16:24:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.351 16:24:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.351 16:24:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.351 16:24:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.351 16:24:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.351 16:24:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.351 16:24:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.351 16:24:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508308 kB' 'MemAvailable: 9437732 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497388 kB' 'Inactive: 2753100 kB' 'Active(anon): 128236 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119312 kB' 'Mapped: 50896 kB' 'Shmem: 10492 kB' 'KReclaimable: 88236 kB' 'Slab: 190892 kB' 'SReclaimable: 88236 kB' 'SUnreclaim: 102656 kB' 'KernelStack: 6672 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:43.351 16:24:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.352 [... xtrace condensed: every /proc/meminfo key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and is skipped via "IFS=': '" / "read -r var val _" / "continue" ...]
00:04:43.353 16:24:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.353 16:24:20 -- setup/common.sh@33 -- # echo 0
00:04:43.353 16:24:20 -- setup/common.sh@33 -- # return 0
00:04:43.353 16:24:20 -- setup/hugepages.sh@99 -- # surp=0
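Editor's note: get_meminfo is node-aware. With no node argument the sysfs existence test above fails (the path degenerates to node/node/meminfo) and the helper reads /proc/meminfo; with a node id, as in the "get_meminfo HugePages_Surp 0" call further down this trace, it reads that node's meminfo and strips the "Node N " prefix sysfs prepends to every line (the extglob pattern in the trace). A hedged sketch extending the one above; names follow the trace but this is a simplification, not the exact SPDK function:

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern below, as in setup/common.sh

    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node stats live under sysfs; fall back to the global file otherwise.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemFree: ..." -> "MemFree: ..."
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp     # global: scans /proc/meminfo
    get_meminfo HugePages_Surp 0   # per-node: scans /sys/devices/system/node/node0/meminfo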
00:04:43.353 16:24:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.353 16:24:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:43.353 16:24:20 -- setup/common.sh@18 -- # local node=
00:04:43.353 16:24:20 -- setup/common.sh@19 -- # local var val
00:04:43.353 16:24:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.353 16:24:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.353 16:24:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.353 16:24:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.353 16:24:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.353 16:24:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.353 16:24:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.353 16:24:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.353 16:24:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508308 kB' 'MemAvailable: 9437728 kB' 'Buffers: 2684 kB' 'Cached: 3130064 kB' 'SwapCached: 0 kB' 'Active: 497240 kB' 'Inactive: 2753100 kB' 'Active(anon): 128088 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119176 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190884 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102652 kB' 'KernelStack: 6688 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:43.354 [... xtrace condensed: every /proc/meminfo key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and is skipped via "IFS=': '" / "read -r var val _" / "continue" ...]
00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.616 16:24:20 -- setup/common.sh@33 -- # echo 0
00:04:43.616 16:24:20 -- setup/common.sh@33 -- # return 0
00:04:43.616 nr_hugepages=1024
00:04:43.616 resv_hugepages=0
00:04:43.616 surplus_hugepages=0
00:04:43.616 anon_hugepages=0
00:04:43.616 16:24:20 -- setup/hugepages.sh@100 -- # resv=0
00:04:43.616 16:24:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:43.616 16:24:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.616 16:24:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.616 16:24:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:43.616 16:24:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.616 16:24:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:43.616 16:24:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.616 16:24:20 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:43.616 16:24:20 -- setup/common.sh@18 -- # local node=
00:04:43.616 16:24:20 -- setup/common.sh@19 -- # local var val
00:04:43.616 16:24:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.616 16:24:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.616 16:24:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.616 16:24:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.616 16:24:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.616 16:24:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.616 16:24:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508308 kB' 'MemAvailable: 9437728 kB' 'Buffers: 2684 kB' 'Cached: 3130064 kB' 'SwapCached: 0 kB' 'Active: 497112 kB' 'Inactive: 2753100 kB' 'Active(anon): 127960 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753100 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119048 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190884 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102652 kB' 'KernelStack: 6672 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 308256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
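Editor's note: the "(( 1024 == nr_hugepages + surp + resv ))" test above is verify_nr_hugepages cross-checking the kernel's counters against what the test requested: HugePages_Total must equal the requested page count plus surplus plus reserved pages. A worked sketch with the values this run reports:

    # Sketch of the consistency check traced above; the values are the
    # ones this log reports for the run.
    nr_hugepages=1024   # requested by get_test_nr_hugepages
    surp=0              # HugePages_Surp from get_meminfo
    resv=0              # HugePages_Rsvd from get_meminfo
    total=1024          # HugePages_Total from get_meminfo

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "hugepage accounting mismatch" >&2
        exit 1
    fi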
'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190884 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102652 kB' 'KernelStack: 6672 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 308256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.616 16:24:20 -- setup/common.sh@32 -- # continue 00:04:43.616 16:24:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.616 
16:24:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.616-617 16:24:20 -- setup/common.sh@31..32 -- # scan /proc/meminfo for HugePages_Total; [[ <var> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] fails and continues for: Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted
00:04:43.617 16:24:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.617 16:24:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.617 16:24:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:43.617 16:24:20 -- setup/common.sh@33 -- # echo 1024
00:04:43.617 16:24:20 -- setup/common.sh@33 -- # return 0
00:04:43.617 16:24:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.617 16:24:20 -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.617 16:24:20 -- setup/hugepages.sh@27 -- # local node
00:04:43.617 16:24:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.617 16:24:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:43.617 16:24:20 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:43.617 16:24:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.617 16:24:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.617 16:24:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.617 16:24:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.617 16:24:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.617 16:24:20 -- setup/common.sh@18 -- # local node=0
00:04:43.617 16:24:20 -- setup/common.sh@19 -- # local var val
00:04:43.617 16:24:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.617 16:24:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.617 16:24:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.617 16:24:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.617 16:24:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.617 16:24:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.617 16:24:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.618 16:24:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508560 kB' 'MemUsed: 5730552 kB' 'SwapCached: 0 kB' 'Active: 497264 kB' 'Inactive: 2753108 kB' 'Active(anon): 128112 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753108 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3132752 kB' 'Mapped: 50992 kB' 'AnonPages: 119284 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88232 kB' 'Slab: 190876 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
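The @28/@29 pair above is worth unpacking: per-node meminfo files prefix every line with "Node <N> ", so the script reads the file into an array and strips that prefix with an extglob parameter expansion before the field scan. A minimal standalone sketch of that read (illustrative variable names, not the SPDK helper itself):

  #!/usr/bin/env bash
  # Sketch: read a per-node meminfo and strip the "Node <N> " prefix,
  # as the mapfile/mem=(...) trace lines above do. Assumes Linux sysfs.
  shopt -s extglob                        # the +([0-9]) pattern needs extglob
  node=0                                  # hypothetical: node id to inspect
  mem_f=/sys/devices/system/node/node$node/meminfo
  [[ -e $mem_f ]] || mem_f=/proc/meminfo  # fall back to the global file
  mapfile -t mem < "$mem_f"               # one array element per line
  mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node 0 " prefix, if any
  printf '%s\n' "${mem[@]}"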
00:04:43.618 16:24:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.618 16:24:20 -- setup/common.sh@31..32 -- # scan the node0 snapshot for HugePages_Surp; no match, continuing for: MemTotal MemFree MemUsed SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim
00:04:43.618 16:24:20 -- setup/common.sh@31..32 -- # scan continues, still no match: AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped Unaccepted HugePages_Total HugePages_Free
00:04:43.618 16:24:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.618 16:24:20 -- setup/common.sh@33 -- # echo 0
00:04:43.618 16:24:20 -- setup/common.sh@33 -- # return 0
00:04:43.618 node0=1024 expecting 1024
00:04:43.618 ************************************
00:04:43.618 END TEST default_setup
00:04:43.618 ************************************
00:04:43.618 16:24:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.619 16:24:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.619 16:24:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.619 16:24:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.619 16:24:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:43.619 16:24:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:43.619 real 0m1.090s
00:04:43.619 user 0m0.502s
00:04:43.619 sys 0m0.505s
00:04:43.619 16:24:20 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:43.619 16:24:20 -- common/autotest_common.sh@10 -- # set +x
00:04:43.619 16:24:20 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
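Both lookups in the test that just finished follow the same shape: split each snapshot line on ': ', walk the keys until the requested one matches, then echo the value and return so the caller can capture it. A minimal sketch of that loop (get_field is an illustrative name, not the SPDK function itself):

  # Sketch of the field scan condensed in the traces above.
  get_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  total=$(get_field HugePages_Total)   # the run above saw 1024 here
  echo "HugePages_Total=$total"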
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:43.619 16:24:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:43.619 16:24:20 -- common/autotest_common.sh@10 -- # set +x
00:04:43.619 ************************************
00:04:43.619 START TEST per_node_1G_alloc
00:04:43.619 ************************************
00:04:43.619 16:24:20 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:43.619 16:24:20 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:43.619 16:24:20 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:43.619 16:24:20 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:43.619 16:24:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:43.619 16:24:20 -- setup/hugepages.sh@51 -- # shift
00:04:43.619 16:24:20 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:43.619 16:24:20 -- setup/hugepages.sh@52 -- # local node_ids
00:04:43.619 16:24:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.619 16:24:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:43.619 16:24:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:43.619 16:24:20 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:43.619 16:24:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.619 16:24:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:43.619 16:24:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:43.619 16:24:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.619 16:24:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.619 16:24:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:43.619 16:24:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:43.619 16:24:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:43.619 16:24:20 -- setup/hugepages.sh@73 -- # return 0
00:04:43.619 16:24:20 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:43.619 16:24:20 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:43.619 16:24:20 -- setup/hugepages.sh@146 -- # setup output
00:04:43.619 16:24:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.619 16:24:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:43.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:43.878 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.878 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.141 16:24:21 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:44.141 16:24:21 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:44.141 16:24:21 -- setup/hugepages.sh@89 -- # local node
00:04:44.141 16:24:21 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:44.141 16:24:21 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:44.141 16:24:21 -- setup/hugepages.sh@92 -- # local surp
00:04:44.141 16:24:21 -- setup/hugepages.sh@93 -- # local resv
00:04:44.141 16:24:21 -- setup/hugepages.sh@94 -- # local anon
00:04:44.141 16:24:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
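The sizing steps traced above (get_test_nr_hugepages 1048576 0 leading to nr_hugepages=512) are consistent with a 1048576 kB (1G) request divided by the 2048 kB default hugepage size. A sketch of that arithmetic under those assumptions (variable names here are illustrative, not the SPDK helpers):

  # Sketch: pages needed for a 1G-per-node request, given Hugepagesize.
  size_kb=1048576                                            # 1 GiB expressed in kB
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
  echo "nr_hugepages=$(( size_kb / hp_kb ))"                 # -> 512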
00:04:44.141 16:24:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.141 16:24:21 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:44.141 16:24:21 -- setup/common.sh@18 -- # local node=
00:04:44.141 16:24:21 -- setup/common.sh@19 -- # local var val
00:04:44.141 16:24:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.141 16:24:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.141 16:24:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.141 16:24:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.141 16:24:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.141 16:24:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.141 16:24:21 -- setup/common.sh@31 -- # IFS=': '
00:04:44.141 16:24:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7555260 kB' 'MemAvailable: 10484692 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497480 kB' 'Inactive: 2753112 kB' 'Active(anon): 128328 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119428 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190908 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102676 kB' 'KernelStack: 6696 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:44.141 16:24:21 -- setup/common.sh@31 -- # read -r var val _
00:04:44.141-142 16:24:21 -- setup/common.sh@31..32 -- # scan /proc/meminfo for AnonHugePages; no match, continuing for: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce
00:04:44.142 16:24:21 -- setup/common.sh@31..32 -- # scan continues, still no match: WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted
00:04:44.142 16:24:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.142 16:24:21 -- setup/common.sh@33 -- # echo 0
00:04:44.142 16:24:21 -- setup/common.sh@33 -- # return 0
00:04:44.142 16:24:21 -- setup/hugepages.sh@97 -- # anon=0
00:04:44.142 16:24:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.142 16:24:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.142 16:24:21 -- setup/common.sh@18 -- # local node=
00:04:44.142 16:24:21 -- setup/common.sh@19 -- # local var val
00:04:44.142 16:24:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.142 16:24:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.142 16:24:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.142 16:24:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.142 16:24:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.142 16:24:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.142 16:24:21 -- setup/common.sh@31 -- # IFS=': '
00:04:44.142 16:24:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7555260 kB' 'MemAvailable: 10484692 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497208 kB' 'Inactive: 2753112 kB' 'Active(anon): 128056 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119104 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190912 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102680 kB' 'KernelStack: 6656 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
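At this point the verifier is simply collecting its inputs: anon came back 0 from the AnonHugePages lookup, and the same helper is now re-run for HugePages_Surp and, below, HugePages_Rsvd. A sketch of that gathering step, reusing the get_field sketch from earlier (the THP gate mirrors the @96 test above; the sysfs path is the standard Linux one):

  # Sketch: gather the three counters verify_nr_hugepages needs.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
  anon=0
  [[ $thp != *\[never\]* ]] && anon=$(get_field AnonHugePages)   # skip when THP is [never]
  surp=$(get_field HugePages_Surp)
  resv=$(get_field HugePages_Rsvd)
  echo "anon=$anon surp=$surp resv=$resv"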
00:04:44.142 16:24:21 -- setup/common.sh@31 -- # read -r var val _
00:04:44.142-143 16:24:21 -- setup/common.sh@31..32 -- # scan /proc/meminfo for HugePages_Surp; no match, continuing for: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree
00:04:44.143 16:24:21 -- setup/common.sh@31..32 -- # scan continues, still no match: Unaccepted HugePages_Total HugePages_Free HugePages_Rsvd
00:04:44.144 16:24:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.144 16:24:21 -- setup/common.sh@33 -- # echo 0
00:04:44.144 16:24:21 -- setup/common.sh@33 -- # return 0
00:04:44.144 16:24:21 -- setup/hugepages.sh@99 -- # surp=0
00:04:44.144 16:24:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:44.144 16:24:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:44.144 16:24:21 -- setup/common.sh@18 -- # local node=
00:04:44.144 16:24:21 -- setup/common.sh@19 -- # local var val
00:04:44.144 16:24:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.144 16:24:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.144 16:24:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.144 16:24:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.144 16:24:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.144 16:24:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.144 16:24:21 -- setup/common.sh@31 -- # IFS=': '
00:04:44.144 16:24:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7555780 kB' 'MemAvailable: 10485212 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497244 kB' 'Inactive: 2753112 kB' 'Active(anon): 128092 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119216 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190884 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102652 kB' 'KernelStack: 6688 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
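The recurring "# echo 0" / "# return 0" pairs in these traces are the helper's value protocol: the number goes to stdout for the caller's command substitution, while the return status only signals that the key was found. For a one-off lookup of a single field, an awk alternative with the same stdout/exit-status contract (an alternative, not what setup/common.sh does) would be:

  # Sketch: one-shot lookup; prints the value, exits nonzero if the key is absent.
  awk '/^HugePages_Rsvd:/ {print $2; found=1} END {exit !found}' /proc/meminfo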
00:04:44.144 16:24:21 -- setup/common.sh@31 -- # read -r var val _
00:04:44.144-145 16:24:21 -- setup/common.sh@31..32 -- # scan /proc/meminfo for HugePages_Rsvd; no match so far, continuing for: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk (scan continues)
00:04:44.145
16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.145 16:24:21 -- setup/common.sh@32 -- # continue 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.145 16:24:21 -- setup/common.sh@32 -- 
00:04:44.145 16:24:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.145 16:24:21 -- setup/common.sh@33 -- # echo 0
00:04:44.145 16:24:21 -- setup/common.sh@33 -- # return 0
00:04:44.145 16:24:21 -- setup/hugepages.sh@100 -- # resv=0
00:04:44.145 16:24:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:44.145 nr_hugepages=512
00:04:44.145 resv_hugepages=0
00:04:44.145 surplus_hugepages=0
00:04:44.145 anon_hugepages=0
00:04:44.145 16:24:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:44.145 16:24:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:44.145 16:24:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:44.145 16:24:21 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:44.145 16:24:21 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:44.145 16:24:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:44.145 16:24:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:44.145 16:24:21 -- setup/common.sh@18 -- # local node=
00:04:44.145 16:24:21 -- setup/common.sh@19 -- # local var val
00:04:44.145 16:24:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.145 16:24:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.145 16:24:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.145 16:24:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.145 16:24:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.145 16:24:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.145 16:24:21 -- setup/common.sh@31 -- # IFS=': '
00:04:44.145 16:24:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7556312 kB' 'MemAvailable: 10485744 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497392 kB' 'Inactive: 2753112 kB' 'Active(anon): 128240 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 50836 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190884 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102652 kB' 'KernelStack: 6704 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:44.145 16:24:21 -- setup/common.sh@31 -- # read -r var val _
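What the compare/continue churn in this trace amounts to: setup/common.sh splits each meminfo line on ': ' and keeps reading until the requested field name matches, then prints that field's value. A minimal bash sketch of that loop, assuming the split/compare structure is exactly what the trace shows (the function name here is hypothetical, not the script's real source):

    # Sketch: print the value of one meminfo field, the way the traced loop does.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # non-matching fields are the "continue" entries in the trace above
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Rsvd  ->  0 on this host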
[...trace condensed: the scan for HugePages_Total runs the same compare/continue cycle for every field from MemTotal through Unaccepted...]
00:04:44.147 16:24:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:44.147 16:24:21 -- setup/common.sh@33 -- # echo 512
00:04:44.147 16:24:21 -- setup/common.sh@33 -- # return 0
00:04:44.147 16:24:21 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:44.147 16:24:21 -- setup/hugepages.sh@112 -- # get_nodes
00:04:44.147 16:24:21 -- setup/hugepages.sh@27 -- # local node
00:04:44.147 16:24:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.147 16:24:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:44.147 16:24:21 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:44.147 16:24:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:44.147 16:24:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:44.147 16:24:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:44.147 16:24:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:44.147 16:24:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.147 16:24:21 -- setup/common.sh@18 -- # local node=0
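The get_meminfo HugePages_Surp 0 call above is node-scoped, so the next thing the trace does is swap the input file. A sketch of that source selection, assuming the same fallback order the @22-@29 entries show (variable names mirror the trace, but this is not the verbatim script):

    # Sketch: choose the meminfo source for a node-scoped query (node 0 here).
    shopt -s extglob                 # the +([0-9]) pattern below needs extglob
    node=0
    mem_f=/proc/meminfo              # default: system-wide meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node lines read "Node 0 HugePages_Surp: 0"; strip the "Node 0 "
    # prefix so the field names line up with the system-wide format
    mem=("${mem[@]#Node +([0-9]) }")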
00:04:44.147 16:24:21 -- setup/common.sh@19 -- # local var val
00:04:44.147 16:24:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.147 16:24:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.147 16:24:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:44.147 16:24:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:44.147 16:24:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.147 16:24:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.147 16:24:21 -- setup/common.sh@31 -- # IFS=': '
00:04:44.147 16:24:21 -- setup/common.sh@31 -- # read -r var val _
00:04:44.147 16:24:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7556416 kB' 'MemUsed: 4682696 kB' 'SwapCached: 0 kB' 'Active: 497084 kB' 'Inactive: 2753112 kB' 'Active(anon): 127932 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3132752 kB' 'Mapped: 50784 kB' 'AnonPages: 119084 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88232 kB' 'Slab: 190860 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[...trace condensed: the node0 scan for HugePages_Surp runs the same compare/continue cycle for every field from MemTotal through HugePages_Free...]
00:04:44.148 16:24:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.148 16:24:21 -- setup/common.sh@33 -- # echo 0
00:04:44.148 16:24:21 -- setup/common.sh@33 -- # return 0
00:04:44.148 node0=512 expecting 512
00:04:44.148 ************************************
00:04:44.148 END TEST per_node_1G_alloc
00:04:44.148 ************************************
00:04:44.148 16:24:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:44.148 16:24:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:44.148 16:24:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:44.148 16:24:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:44.148 16:24:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:44.148 16:24:21 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:44.148 
00:04:44.148 real 0m0.653s
00:04:44.148 user 0m0.295s
00:04:44.148 sys 0m0.360s
00:04:44.148 16:24:21 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:44.148 16:24:21 -- common/autotest_common.sh@10 -- # set +x
00:04:44.440 16:24:21 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:44.440 16:24:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:44.440 16:24:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:44.440 16:24:21 -- common/autotest_common.sh@10 -- # set +x
00:04:44.440 ************************************
00:04:44.440 START TEST even_2G_alloc
00:04:44.440 ************************************
00:04:44.440 16:24:21 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:44.440 16:24:21 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:44.440 16:24:21 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:44.440 16:24:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:44.440 16:24:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:44.440 16:24:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:44.440 16:24:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:44.440 16:24:21 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:44.440 16:24:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:44.440 16:24:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:44.440 16:24:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:44.440 16:24:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:44.440 16:24:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:44.440 16:24:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
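The get_test_nr_hugepages trace above turns the 2097152 kB request into a page count and hands it to get_test_nr_hugepages_per_node. The arithmetic, sketched for the single-node case seen here (names are illustrative; the real setup/hugepages.sh also handles user-pinned nodes and remainders):

    # Sketch: 2 GiB requested at 2048 kB per huge page -> 1024 pages on node 0.
    size=2097152                                  # requested size in kB
    default_hugepages=2048                        # Hugepagesize, from /proc/meminfo
    nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024
    _no_nodes=1                                   # nodes present on this VM
    declare -a nodes_test
    per_node=$(( nr_hugepages / _no_nodes ))      # even split across nodes
    nodes_test[_no_nodes - 1]=$per_node           # nodes_test[0]=1024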
00:04:44.440 16:24:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:44.440 16:24:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:44.440 16:24:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:44.440 16:24:21 -- setup/hugepages.sh@83 -- # : 0
00:04:44.440 16:24:21 -- setup/hugepages.sh@84 -- # : 0
00:04:44.440 16:24:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:44.440 16:24:21 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:44.440 16:24:21 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:44.440 16:24:21 -- setup/hugepages.sh@153 -- # setup output
00:04:44.440 16:24:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:44.440 16:24:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:44.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:44.734 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.734 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.734 16:24:22 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:44.734 16:24:22 -- setup/hugepages.sh@89 -- # local node
00:04:44.734 16:24:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:44.734 16:24:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:44.734 16:24:22 -- setup/hugepages.sh@92 -- # local surp
00:04:44.734 16:24:22 -- setup/hugepages.sh@93 -- # local resv
00:04:44.734 16:24:22 -- setup/hugepages.sh@94 -- # local anon
00:04:44.734 16:24:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:44.734 16:24:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.734 16:24:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:44.734 16:24:22 -- setup/common.sh@18 -- # local node=
00:04:44.734 16:24:22 -- setup/common.sh@19 -- # local var val
00:04:44.734 16:24:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.734 16:24:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.734 16:24:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.734 16:24:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.734 16:24:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.734 16:24:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.734 16:24:22 -- setup/common.sh@31 -- # IFS=': '
00:04:44.734 16:24:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6509304 kB' 'MemAvailable: 9438736 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497684 kB' 'Inactive: 2753112 kB' 'Active(anon): 128532 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119660 kB' 'Mapped: 50900 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190900 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102668 kB' 'KernelStack: 6712 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
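verify_nr_hugepages samples AnonHugePages only because the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] check passed: that string is the kernel's transparent-hugepage mode, with the active setting bracketed. A sketch of that gate, assuming the standard sysfs location (variable names are illustrative, not the script's verbatim logic):

    # Sketch: only sample AnonHugePages when THP is not pinned to [never].
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        # THP can back anonymous mappings, so the counter is meaningful
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon=0
    fi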
00:04:44.734 16:24:22 -- setup/common.sh@31 -- # read -r var val _
[...trace condensed: the scan for AnonHugePages runs the same compare/continue cycle for every field from MemTotal through HardwareCorrupted...]
00:04:44.735 16:24:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.735 16:24:22 -- setup/common.sh@33 -- # echo 0
00:04:44.735 16:24:22 -- setup/common.sh@33 -- # return 0
00:04:44.735 16:24:22 -- setup/hugepages.sh@97 -- # anon=0
00:04:44.735 16:24:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.735 16:24:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.735 16:24:22 -- setup/common.sh@18 -- # local node=
00:04:44.735 16:24:22 -- setup/common.sh@19 -- # local var val
00:04:44.735 16:24:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.735 16:24:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.735 16:24:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.735 16:24:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.735 16:24:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.735 16:24:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.735 16:24:22 -- setup/common.sh@31 -- # IFS=': '
00:04:44.735 16:24:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6509052 kB' 'MemAvailable: 9438484 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497196 kB' 'Inactive: 2753112 kB' 'Active(anon): 128044 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119128 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190900 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102668 kB' 'KernelStack: 6672 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _
[...trace condensed: the scan for HugePages_Surp runs the same compare/continue cycle for each field from MemTotal through SUnreclaim and continues below...]
00:04:44.736 16:24:22
-- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- 
setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.736 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.736 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.737 16:24:22 -- setup/common.sh@33 -- # echo 0 00:04:44.737 16:24:22 -- setup/common.sh@33 -- # return 0 00:04:44.737 16:24:22 -- setup/hugepages.sh@99 -- # surp=0 
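
The scan traced above is setup/common.sh's get_meminfo helper working through a meminfo snapshot: each line is split with IFS=': ' into a key, a value, and a unit, and every field is skipped with continue until the requested key (here HugePages_Surp) matches, at which point only the value is echoed. A minimal runnable sketch of that pattern, reconstructed from the trace (the real helper snapshots the file into an array with mapfile first; reading /proc/meminfo directly is a simplification):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the xtrace above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # skip fields until the key matches
            echo "$val"                       # value only, e.g. "0"
            return 0
        done </proc/meminfo
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)  # 0 on this runner, per the trace

The escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the trace is just how xtrace prints the quoted right-hand side of that [[ ... == ... ]] comparison.
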
00:04:44.737 16:24:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:44.737 16:24:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:44.737 16:24:22 -- setup/common.sh@18 -- # local node=
00:04:44.737 16:24:22 -- setup/common.sh@19 -- # local var val
00:04:44.737 16:24:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.737 16:24:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.737 16:24:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.737 16:24:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.737 16:24:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.737 16:24:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.737 16:24:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6509052 kB' 'MemAvailable: 9438484 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497188 kB' 'Inactive: 2753112 kB' 'Active(anon): 128036 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119116 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190900 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102668 kB' 'KernelStack: 6672 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:44.737 16:24:22 -- setup/common.sh@31 -- # IFS=': '
00:04:44.737 16:24:22 -- setup/common.sh@31 -- # read -r var val _
00:04:44.737 16:24:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.737 16:24:22 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace elided for MemFree through HugePages_Free ...]
00:04:44.738 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.738 16:24:22 -- setup/common.sh@33 -- # echo 0
00:04:44.738 16:24:22 -- setup/common.sh@33 -- # return 0
00:04:44.738 16:24:22 -- setup/hugepages.sh@100 -- # resv=0
00:04:44.738 16:24:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:44.738 nr_hugepages=1024
00:04:44.738 16:24:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:44.738 resv_hugepages=0
00:04:44.738 16:24:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:44.738 surplus_hugepages=0
00:04:44.738 16:24:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:44.738 anon_hugepages=0
00:04:44.738 16:24:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:44.738 16:24:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:44.738 16:24:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:44.738 16:24:22 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:44.738 16:24:22 -- setup/common.sh@18 -- # local node=
00:04:44.738 16:24:22 -- setup/common.sh@19 -- # local var val
00:04:44.738 16:24:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.738 16:24:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.738 16:24:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.738 16:24:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.738 16:24:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.738 16:24:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.738 16:24:22 -- setup/common.sh@31 -- # IFS=': '
00:04:45.015 16:24:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6509052 kB' 'MemAvailable: 9438484 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497288 kB' 'Inactive: 2753112 kB' 'Active(anon): 128136 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119216 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190896 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102664 kB' 'KernelStack: 6672 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 308624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:45.015 16:24:22 -- setup/common.sh@31 -- # read -r var val _
00:04:45.015 16:24:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:45.015 16:24:22 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace elided for MemFree through Unaccepted ...]
00:04:45.017 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:45.017 16:24:22 -- setup/common.sh@33 -- # echo 1024
00:04:45.017 16:24:22 -- setup/common.sh@33 -- # return 0
00:04:45.017 16:24:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.017 16:24:22 -- setup/hugepages.sh@112 -- # get_nodes
00:04:45.017 16:24:22 -- setup/hugepages.sh@27 -- # local node
00:04:45.017 16:24:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:45.017 16:24:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:45.017 16:24:22 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:45.017 16:24:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:45.017 16:24:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:45.017 16:24:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:45.017 16:24:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:45.017 16:24:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.017 16:24:22 -- setup/common.sh@18 -- # local node=0
00:04:45.017 16:24:22 -- setup/common.sh@19 -- # local var val
00:04:45.017 16:24:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.017 16:24:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.017 16:24:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:45.017 16:24:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:45.017 16:24:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.017 16:24:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.017 16:24:22 -- setup/common.sh@31 -- # IFS=': '
00:04:45.017 16:24:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508800 kB' 'MemUsed: 5730312 kB' 'SwapCached: 0 kB' 'Active: 497180 kB' 'Inactive: 2753112 kB' 'Active(anon): 128028 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132752 kB' 'Mapped: 50784 kB' 'AnonPages: 119108 kB' 'Shmem: 10488 kB' 'KernelStack: 6672 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88232 kB' 'Slab: 190884 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
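
For the per-node pass, get_meminfo is called with a node argument: the @23/@24 lines above show mem_f switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo when that file exists, and the @29 expansion strips the "Node <n> " prefix the per-node file puts on every line. A runnable sketch of that variant, reconstructed from the trace (extglob is needed for the +([0-9]) pattern):

    #!/usr/bin/env bash
    shopt -s extglob
    # get_meminfo with an optional NUMA node, per the trace: the per-node
    # sysfs file is used when it exists, and its "Node <n> " line prefix is
    # stripped so the same key/value parsing works for both sources.
    get_meminfo_node() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo mem=()
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_node HugePages_Surp 0   # node0 surplus, 0 in the trace

Called with an empty node the probe path contains the literal "node" and fails the -e test, which is exactly the /sys/devices/system/node/node/meminfo check seen in the node-less calls earlier.
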
00:04:45.017 16:24:22 -- setup/common.sh@31 -- # read -r var val _
00:04:45.017 16:24:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.017 16:24:22 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace elided for MemFree through HugePages_Free in the node0 meminfo ...]
00:04:45.018 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.018 16:24:22 -- setup/common.sh@33 -- # echo 0
00:04:45.018 16:24:22 -- setup/common.sh@33 -- # return 0
00:04:45.018 16:24:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.018 16:24:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.018 16:24:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
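
The @115 through @127 bookkeeping folds any reserved pages into each node's expected count and then marks that count in sorted_t, using the array index itself as a set member; if every node ends up marking the same index, the set has one element. A small sketch of that pattern (the single-element-set interpretation is an assumption; the trace only shows the assignments):

    #!/usr/bin/env bash
    declare -a nodes_test=([0]=1024)  # expected pages per node, from the trace
    declare -a sorted_t=()
    resv=0                            # HugePages_Rsvd, 0 in the trace

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # fold reserved pages into the target
        sorted_t[nodes_test[node]]=1     # index-as-set: record distinct counts
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done
    (( ${#sorted_t[@]} == 1 )) && echo "all nodes expect the same count"
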
00:04:45.018 16:24:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.018 16:24:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:45.018 node0=1024 expecting 1024
00:04:45.018 16:24:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:45.018 ************************************
00:04:45.018 END TEST even_2G_alloc
00:04:45.018 ************************************
00:04:45.018
00:04:45.018 real 0m0.621s
00:04:45.018 user 0m0.286s
00:04:45.018 sys 0m0.337s
00:04:45.018 16:24:22 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:45.018 16:24:22 -- common/autotest_common.sh@10 -- # set +x
00:04:45.018 16:24:22 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:45.018 16:24:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:45.018 16:24:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:45.018 16:24:22 -- common/autotest_common.sh@10 -- # set +x
00:04:45.018 ************************************
00:04:45.018 START TEST odd_alloc
00:04:45.018 ************************************
00:04:45.018 16:24:22 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:45.018 16:24:22 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:45.018 16:24:22 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:45.018 16:24:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:45.018 16:24:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:45.018 16:24:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:45.018 16:24:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:45.018 16:24:22 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:45.018 16:24:22 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:45.018 16:24:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:45.018 16:24:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:45.018 16:24:22 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:45.018 16:24:22 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:45.018 16:24:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:45.018 16:24:22 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:45.018 16:24:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:45.018 16:24:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:45.018 16:24:22 -- setup/hugepages.sh@83 -- # : 0
00:04:45.018 16:24:22 -- setup/hugepages.sh@84 -- # : 0
00:04:45.018 16:24:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:45.018 16:24:22 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:45.018 16:24:22 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:45.018 16:24:22 -- setup/hugepages.sh@160 -- # setup output
00:04:45.018 16:24:22 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:45.018 16:24:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:45.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:45.289 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:45.289 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:45.289 16:24:22 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:45.289 16:24:22 -- setup/hugepages.sh@89 -- # local node
00:04:45.289 16:24:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:45.289 16:24:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:45.289 16:24:22 -- setup/hugepages.sh@92 -- # local surp
00:04:45.289 16:24:22 -- setup/hugepages.sh@93 -- # local resv
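
The odd_alloc setup above asks for 2098176 kB, which is 1024.5 default 2048 kB pages, and the trace lands on nr_hugepages=1025 with HUGEMEM=2049 exported before scripts/setup.sh reruns; 2049 MB is exactly 2098176 kB, so the odd size is a deliberate half-page overshoot. A sketch of the sizing arithmetic (the round-up formula is an assumption; the trace only shows the input and the 1025 result):

    #!/usr/bin/env bash
    size_kb=2098176         # argument to get_test_nr_hugepages in the trace
    hugepagesize_kb=2048    # Hugepagesize from the meminfo snapshots above

    # Round up to whole pages: 2098176 / 2048 = 1024.5 -> 1025.
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"   # prints 1025, matching the trace

    # HUGEMEM is given to scripts/setup.sh in MB: 2049 MB == 2098176 kB.
    #   HUGEMEM=2049 ./scripts/setup.sh
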
00:04:45.289 16:24:22 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:45.289 16:24:22 -- setup/hugepages.sh@89 -- # local node
00:04:45.289 16:24:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:45.289 16:24:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:45.289 16:24:22 -- setup/hugepages.sh@92 -- # local surp
00:04:45.289 16:24:22 -- setup/hugepages.sh@93 -- # local resv
00:04:45.289 16:24:22 -- setup/hugepages.sh@94 -- # local anon
00:04:45.289 16:24:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:45.289 16:24:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:45.289 16:24:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:45.289 16:24:22 -- setup/common.sh@18 -- # local node=
00:04:45.289 16:24:22 -- setup/common.sh@19 -- # local var val
00:04:45.289 16:24:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.290 16:24:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.290 16:24:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.290 16:24:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.290 16:24:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.290 16:24:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.553 16:24:22 -- setup/common.sh@31 -- # IFS=': '
00:04:45.553 16:24:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6506336 kB' 'MemAvailable: 9435768 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497460 kB' 'Inactive: 2753112 kB' 'Active(anon): 128308 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119436 kB' 'Mapped: 50912 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190900 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102668 kB' 'KernelStack: 6696 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 317980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:45.553 16:24:22 -- setup/common.sh@31 -- # read -r var val _
[ ... setup/common.sh@31-32: IFS=': ' / read -r var val _ / continue repeated for each non-matching key, MemTotal through HardwareCorrupted ... ]
00:04:45.554 16:24:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:45.554 16:24:22 -- setup/common.sh@33 -- # echo 0
00:04:45.554 16:24:22 -- setup/common.sh@33 -- # return 0
00:04:45.554 16:24:22 -- setup/hugepages.sh@97 -- # anon=0
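Every get_meminfo call in this trace follows the same pattern: read the whole meminfo file into an array, strip any per-node prefix, then scan key by key until the requested field matches; each non-matching key is the IFS=': ' / read / continue triple that dominates the raw log. A condensed re-implementation assembled from the trace (an approximation, not the verbatim setup/common.sh source):

#!/usr/bin/env bash
# Reconstruction of the get_meminfo loop visible in the trace above.
shopt -s extglob
get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f=/proc/meminfo mem
	# With a node argument, read that node's own meminfo instead of the global one.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }") # drop the "Node N " prefix on per-node files
	while IFS=': ' read -r var val _; do
		# Each key that is not $get is one continue/IFS/read triple in the log.
		[[ $var == "$get" ]] && echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}
get_meminfo AnonHugePages # prints 0 on this machine, hence anon=0 above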
00:04:45.554 16:24:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.554 16:24:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.554 16:24:22 -- setup/common.sh@18 -- # local node=
00:04:45.554 16:24:22 -- setup/common.sh@19 -- # local var val
00:04:45.554 16:24:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.554 16:24:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.554 16:24:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.554 16:24:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.554 16:24:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.554 16:24:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.554 16:24:22 -- setup/common.sh@31 -- # IFS=': '
00:04:45.554 16:24:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6506336 kB' 'MemAvailable: 9435768 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497196 kB' 'Inactive: 2753112 kB' 'Active(anon): 128044 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119128 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190904 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102672 kB' 'KernelStack: 6688 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 317980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:45.554 16:24:22 -- setup/common.sh@31 -- # read -r var val _
[ ... setup/common.sh@31-32: per-key scan, MemTotal through HugePages_Rsvd, none matching HugePages_Surp ... ]
00:04:45.555 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.555 16:24:22 -- setup/common.sh@33 -- # echo 0
00:04:45.555 16:24:22 -- setup/common.sh@33 -- # return 0
00:04:45.555 16:24:22 -- setup/hugepages.sh@99 -- # surp=0
00:04:45.555 16:24:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:45.555 16:24:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:45.555 16:24:22 -- setup/common.sh@18 -- # local node=
00:04:45.555 16:24:22 -- setup/common.sh@19 -- # local var val
00:04:45.555 16:24:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.555 16:24:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.555 16:24:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.555 16:24:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.555 16:24:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.555 16:24:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.555 16:24:22 -- setup/common.sh@31 -- # IFS=': '
00:04:45.555 16:24:22 -- setup/common.sh@31 -- # read -r var val _
00:04:45.555 16:24:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6506336 kB' 'MemAvailable: 9435768 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497212 kB' 'Inactive: 2753112 kB' 'Active(anon): 128060 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119144 kB' 'Mapped: 50784 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190904 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102672 kB' 'KernelStack: 6688 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 317980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
[ ... setup/common.sh@31-32: per-key scan, MemTotal through HugePages_Free, none matching HugePages_Rsvd ... ]
00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:45.557 16:24:22 -- setup/common.sh@33 -- # echo 0
00:04:45.557 16:24:22 -- setup/common.sh@33 -- # return 0
00:04:45.557 nr_hugepages=1025
00:04:45.557 16:24:22 -- setup/hugepages.sh@100 -- # resv=0
00:04:45.557 16:24:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:45.557 resv_hugepages=0
00:04:45.557 surplus_hugepages=0
00:04:45.557 anon_hugepages=0
00:04:45.557 16:24:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:45.557 16:24:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:45.557 16:24:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:45.557 16:24:22 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:45.557 16:24:22 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
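The assertions at hugepages.sh@107 and @109 above are the heart of verify_nr_hugepages: the page count the kernel reports must balance against the requested, surplus and reserved counts. Restated standalone with this run's values:

#!/usr/bin/env bash
# The accounting identity checked above, using the values from this run.
nr_hugepages=1025   # requested via HUGEMEM=2049
surp=0              # HugePages_Surp from /proc/meminfo
resv=0              # HugePages_Rsvd from /proc/meminfo
total=1025          # HugePages_Total from /proc/meminfo
if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
	echo "hugepage accounting verified"
else
	echo "hugepage accounting mismatch" >&2
	exit 1
fi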
00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 
00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.557 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.557 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.558 16:24:22 -- setup/common.sh@33 -- # echo 1025 00:04:45.558 16:24:22 -- setup/common.sh@33 -- # return 0 00:04:45.558 16:24:22 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:45.558 16:24:22 -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.558 16:24:22 -- setup/hugepages.sh@27 -- # local node 00:04:45.558 16:24:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.558 16:24:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
00:04:45.558 16:24:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.558 16:24:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.558 16:24:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.558 16:24:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.558 16:24:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.558 16:24:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.558 16:24:22 -- setup/common.sh@18 -- # local node=0 00:04:45.558 16:24:22 -- setup/common.sh@19 -- # local var val 00:04:45.558 16:24:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.558 16:24:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.558 16:24:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.558 16:24:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.558 16:24:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.558 16:24:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.558 16:24:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6506852 kB' 'MemUsed: 5732260 kB' 'SwapCached: 0 kB' 'Active: 497048 kB' 'Inactive: 2753112 kB' 'Active(anon): 127896 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132752 kB' 'Mapped: 50784 kB' 'AnonPages: 118988 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88232 kB' 'Slab: 190884 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:45.558 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.558 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 
16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 
16:24:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # continue 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.559 16:24:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.559 16:24:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.559 16:24:22 -- setup/common.sh@33 -- # echo 0 00:04:45.559 16:24:22 -- setup/common.sh@33 -- # return 0 00:04:45.559 node0=1025 expecting 1025 00:04:45.559 ************************************ 00:04:45.559 END TEST odd_alloc 00:04:45.559 ************************************ 00:04:45.559 16:24:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.559 16:24:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.559 16:24:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.559 16:24:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.559 16:24:22 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:45.559 16:24:22 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:45.559 00:04:45.559 real 0m0.615s 00:04:45.559 user 0m0.281s 00:04:45.559 sys 0m0.341s 00:04:45.559 16:24:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.559 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.559 16:24:23 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:45.559 16:24:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.559 16:24:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.560 16:24:23 -- common/autotest_common.sh@10 -- # set +x 00:04:45.560 ************************************ 00:04:45.560 START TEST custom_alloc 00:04:45.560 ************************************ 00:04:45.560 16:24:23 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:45.560 16:24:23 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:45.560 16:24:23 -- setup/hugepages.sh@169 -- # local node 00:04:45.560 16:24:23 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:45.560 16:24:23 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:45.560 16:24:23 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:45.560 16:24:23 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:45.560 16:24:23 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:45.560 16:24:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:45.560 16:24:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:45.560 16:24:23 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.560 16:24:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.560 16:24:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.560 16:24:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.560 16:24:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.560 16:24:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.560 16:24:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:45.560 16:24:23 -- setup/hugepages.sh@83 -- # : 0 00:04:45.560 16:24:23 -- setup/hugepages.sh@84 -- # : 0 00:04:45.560 16:24:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:45.560 16:24:23 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:45.560 16:24:23 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:45.560 16:24:23 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:45.560 16:24:23 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.560 16:24:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.560 16:24:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.560 16:24:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.560 16:24:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.560 16:24:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.560 16:24:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:45.560 16:24:23 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:45.560 16:24:23 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:45.560 16:24:23 -- setup/hugepages.sh@78 -- # return 0 00:04:45.560 16:24:23 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:45.560 16:24:23 -- setup/hugepages.sh@187 -- # setup output 00:04:45.560 16:24:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.560 16:24:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.132 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.132 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.132 16:24:23 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:46.132 16:24:23 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:46.132 16:24:23 -- setup/hugepages.sh@89 -- # local node 00:04:46.132 16:24:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.132 16:24:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.132 16:24:23 -- setup/hugepages.sh@92 -- # local surp 
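[annotation] The custom_alloc pass above asks get_test_nr_hugepages for 1048576 kB of hugepage memory and immediately lands on nr_hugepages=512, which it then pins to the single node via HUGENODE='nodes_hp[0]=512' before re-running setup. The arithmetic is simply the budget divided by the runner's Hugepagesize (2048 kB per the meminfo dumps in this trace). A sketch of that conversion; reading Hugepagesize with awk is a stand-in for however the script seeds default_hugepages:

# Requested test size -> hugepage count (values from this trace).
size_kb=1048576                                           # argument to get_test_nr_hugepages
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 kB on this runner
echo "nr_hugepages=$(( size_kb / hp_kb ))"                # -> nr_hugepages=512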
00:04:46.132 16:24:23 -- setup/hugepages.sh@93 -- # local resv 00:04:46.132 16:24:23 -- setup/hugepages.sh@94 -- # local anon 00:04:46.132 16:24:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.132 16:24:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.132 16:24:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.132 16:24:23 -- setup/common.sh@18 -- # local node= 00:04:46.132 16:24:23 -- setup/common.sh@19 -- # local var val 00:04:46.132 16:24:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.132 16:24:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.132 16:24:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.132 16:24:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.132 16:24:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.132 16:24:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.132 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.132 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7553964 kB' 'MemAvailable: 10483396 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497260 kB' 'Inactive: 2753112 kB' 'Active(anon): 128108 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119192 kB' 'Mapped: 50880 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190892 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102660 kB' 'KernelStack: 6664 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 317980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 
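[annotation] The hugepages.sh@96 entry a little earlier in this trace, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], is the gate that decides whether the AnonHugePages scan now in progress is meaningful: the left side is the kernel's transparent-hugepage mode string (the bracketed word is the active mode, here [madvise]), and the escaped right side is bash printing the literal pattern *"[never]"*. A sketch of the same check; the sysfs path is the standard kernel location and is an assumption about what the script reads:

# Gate on the transparent-hugepage mode before trusting AnonHugePages.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never'
if [[ $thp != *"[never]"* ]]; then                    # bracketed word is the active mode
    echo "THP usable, active mode in: $thp"
fi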
00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.133 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.133 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.134 16:24:23 -- setup/common.sh@33 -- # echo 0 00:04:46.134 16:24:23 -- setup/common.sh@33 -- # return 0 00:04:46.134 16:24:23 -- setup/hugepages.sh@97 -- # anon=0 00:04:46.134 16:24:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.134 16:24:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.134 16:24:23 -- setup/common.sh@18 -- # local node= 00:04:46.134 16:24:23 -- setup/common.sh@19 -- # local var val 00:04:46.134 16:24:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.134 16:24:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
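[annotation] get_meminfo takes an optional node argument, and the trace shows both branches: the earlier HugePages_Surp call ran with node=0, so the existence test on /sys/devices/system/node/node0/meminfo succeeded and mem_f switched to the per-node file, with every line's "Node 0 " prefix stripped by the extglob expansion ${mem[@]#Node +([0-9]) }; the call in progress here has node unset, so the test on .../node/node/meminfo fails and mem_f stays /proc/meminfo. A condensed sketch of that selection, assuming extglob is enabled as the trace implies:

shopt -s extglob                       # needed for the +([0-9]) pattern below
node=${1:-}                            # empty => machine-wide /proc/meminfo
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node${node}/meminfo ]] &&
    mem_f=/sys/devices/system/node/node${node}/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")       # 'Node 0 HugePages_Surp: 0' -> 'HugePages_Surp: 0'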
00:04:46.134 16:24:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.134 16:24:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.134 16:24:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.134 16:24:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7554312 kB' 'MemAvailable: 10483744 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497476 kB' 'Inactive: 2753112 kB' 'Active(anon): 128324 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119408 kB' 'Mapped: 50788 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190908 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102676 kB' 'KernelStack: 6688 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 317980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- 
setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.134 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.134 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 
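[annotation] For completeness: the get_nodes helper that ran between the odd_alloc scans (hugepages.sh@27-33 earlier in this trace) discovers NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) and turns each path into an array index with ${node##*node}, which strips everything through the last "node". A sketch under the same extglob assumption; the awk line is a stand-in for the get_meminfo call the trace actually shows feeding nodes_sys:

shopt -s extglob                       # the +([0-9]) glob below is extglob syntax
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # field 4 of 'Node 0 HugePages_Total: 1025' is the count
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $4}' "$node/meminfo")
done
echo "no_nodes=${#nodes_sys[@]}"       # -> no_nodes=1 on this runner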
00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.135 16:24:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.135 16:24:23 -- setup/common.sh@33 -- # echo 0 00:04:46.135 16:24:23 -- setup/common.sh@33 -- # return 0 00:04:46.135 16:24:23 -- setup/hugepages.sh@99 -- # surp=0 00:04:46.135 16:24:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.135 16:24:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.135 16:24:23 -- setup/common.sh@18 -- # local node= 00:04:46.135 16:24:23 -- setup/common.sh@19 -- # local var val 00:04:46.135 16:24:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.135 16:24:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.135 16:24:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.135 16:24:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.135 16:24:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.135 16:24:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.135 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7554568 kB' 'MemAvailable: 10484000 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497240 kB' 'Inactive: 2753112 kB' 'Active(anon): 128088 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119164 kB' 'Mapped: 
50788 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190908 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102676 kB' 'KernelStack: 6688 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 318120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 
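[annotation] Once this HugePages_Rsvd scan returns (the printf above already shows 'HugePages_Rsvd: 0'), verify_nr_hugepages has all three figures it needs (anon=0, surp=0, resv=0) and applies the same identity the odd_alloc pass checked at hugepages.sh@110: the kernel's HugePages_Total must equal the requested count plus surplus plus reserved pages, per node and overall. A sketch of that final check with this trace's custom_alloc values:

# Final accounting for the custom_alloc pass (values from this trace).
nr_hugepages=512 total=512 surp=0 resv=0      # requested / HugePages_Total / _Surp / _Rsvd
if (( total == nr_hugepages + surp + resv )); then
    echo "node0=${total} expecting ${nr_hugepages}"   # counterpart of odd_alloc's 1025 banner
fi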
00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.136 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.136 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 
00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.137 16:24:23 -- setup/common.sh@33 -- # echo 0 00:04:46.137 16:24:23 -- setup/common.sh@33 -- # return 0 00:04:46.137 nr_hugepages=512 00:04:46.137 16:24:23 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.137 16:24:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:46.137 resv_hugepages=0 00:04:46.137 16:24:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.137 surplus_hugepages=0 00:04:46.137 anon_hugepages=0 00:04:46.137 16:24:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.137 16:24:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.137 16:24:23 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:46.137 16:24:23 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:46.137 16:24:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.137 16:24:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.137 16:24:23 -- setup/common.sh@18 -- # local node= 00:04:46.137 16:24:23 -- setup/common.sh@19 -- # local var val 00:04:46.137 16:24:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.137 16:24:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.137 16:24:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.137 16:24:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.137 16:24:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.137 16:24:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7554568 kB' 'MemAvailable: 10484000 kB' 'Buffers: 2684 kB' 'Cached: 3130068 kB' 'SwapCached: 0 kB' 'Active: 497176 kB' 'Inactive: 2753112 kB' 'Active(anon): 128024 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119156 kB' 'Mapped: 50788 kB' 'Shmem: 10488 kB' 'KReclaimable: 88232 kB' 'Slab: 190908 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102676 kB' 'KernelStack: 6688 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 317980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.137 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.137 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.138 16:24:23 -- setup/common.sh@32 -- # continue 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.138 16:24:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.138 16:24:23 -- 
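The records above are setup/common.sh's get_meminfo helper at work: pick a file, strip any per-node prefix, then read each "field: value" pair until the requested field matches. A minimal sketch of that loop, reconstructed from the xtrace (an assumed reconstruction, not the verbatim SPDK source):

    # Hypothetical reconstruction of the get_meminfo loop traced above.
    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Node-scoped queries read the per-node sysfs copy instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix on sysfs lines
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long continue runs in the log
            echo "$val"                        # e.g. 512 for HugePages_Total
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Total   # prints 512 in this run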
[xtrace elided: the same read/continue scan walks every field of the dump above until HugePages_Total matches]
00:04:46.399 16:24:23 -- setup/common.sh@33 -- # echo 512
00:04:46.399 16:24:23 -- setup/common.sh@33 -- # return 0
00:04:46.399 16:24:23 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:46.399 16:24:23 -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.399 16:24:23 -- setup/hugepages.sh@27 -- # local node
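The check at hugepages.sh@110 that just passed is plain accounting: HugePages_Total read back from /proc/meminfo must equal the pages the test requested plus any surplus and reserved pages. Restated as a standalone sketch (values from this run; get_meminfo as sketched earlier):

    nr_hugepages=512
    total=$(get_meminfo HugePages_Total)   # 512 here
    surp=$(get_meminfo HugePages_Surp)     # 0 here
    resv=$(get_meminfo HugePages_Rsvd)     # 0 here
    (( total == nr_hugepages + surp + resv ))   # 512 == 512 + 0 + 0 -> true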
00:04:46.399 16:24:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.399 16:24:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.399 16:24:23 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:46.399 16:24:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.399 16:24:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.399 16:24:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.399 16:24:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.399 16:24:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.399 16:24:23 -- setup/common.sh@18 -- # local node=0
00:04:46.399 16:24:23 -- setup/common.sh@19 -- # local var val
00:04:46.399 16:24:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.399 16:24:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.400 16:24:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.400 16:24:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.400 16:24:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.400 16:24:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.400 16:24:23 -- setup/common.sh@31 -- # IFS=': '
00:04:46.400 16:24:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7554568 kB' 'MemUsed: 4684544 kB' 'SwapCached: 0 kB' 'Active: 497432 kB' 'Inactive: 2753112 kB' 'Active(anon): 128280 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753112 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132752 kB' 'Mapped: 50788 kB' 'AnonPages: 119412 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88232 kB' 'Slab: 190908 kB' 'SReclaimable: 88232 kB' 'SUnreclaim: 102676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:46.400 16:24:23 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: read/continue scan over the node0 fields above until HugePages_Surp matches]
00:04:46.401 16:24:23 -- setup/common.sh@33 -- # echo 0
00:04:46.401 16:24:23 -- setup/common.sh@33 -- # return 0
00:04:46.401 16:24:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:46.401 16:24:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:46.401 node0=512 expecting 512
00:04:46.401 16:24:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:46.401 16:24:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:46.401 16:24:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:46.401 16:24:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:46.401
00:04:46.401 real	0m0.646s
00:04:46.401 user	0m0.291s
00:04:46.401 sys	0m0.361s
00:04:46.401 16:24:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:46.401 ************************************
00:04:46.401 END TEST custom_alloc
00:04:46.401 16:24:23 -- common/autotest_common.sh@10 -- # set +x
00:04:46.401 ************************************
00:04:46.401 16:24:23 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:46.401 16:24:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:46.401 16:24:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:46.401 16:24:23 -- common/autotest_common.sh@10 -- # set +x
00:04:46.401 ************************************
00:04:46.401 START TEST no_shrink_alloc
00:04:46.401 ************************************
00:04:46.401 16:24:23 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:46.401 16:24:23 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:46.401 16:24:23 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:46.401 16:24:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
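Note how the node-scoped get_meminfo call above (common.sh@23-24) switched mem_f from /proc/meminfo to the node0 sysfs file, whose lines carry a "Node 0" prefix that common.sh@29 strips before the same field-matching loop runs. A small sketch of that path (standard sysfs layout; extglob assumed enabled, as in the helper sketched earlier):

    node=0
    mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # sysfs lines read "Node 0 HugePages_Total:   512"; dropping the prefix lets
    # one loop serve both the global and per-node files
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep HugePages_Surp   # "HugePages_Surp: 0" here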
00:04:46.401 16:24:23 -- setup/hugepages.sh@51 -- # shift
00:04:46.401 16:24:23 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:46.401 16:24:23 -- setup/hugepages.sh@52 -- # local node_ids
00:04:46.401 16:24:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:46.401 16:24:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:46.401 16:24:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:46.401 16:24:23 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:46.401 16:24:23 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:46.401 16:24:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:46.401 16:24:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:46.401 16:24:23 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:46.401 16:24:23 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:46.401 16:24:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:46.401 16:24:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:46.401 16:24:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:46.401 16:24:23 -- setup/hugepages.sh@73 -- # return 0
00:04:46.401 16:24:23 -- setup/hugepages.sh@198 -- # setup output
00:04:46.401 16:24:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:46.401 16:24:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:46.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:46.661 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:46.661 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:46.924 16:24:24 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:46.924 16:24:24 -- setup/hugepages.sh@89 -- # local node
00:04:46.924 16:24:24 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:46.924 16:24:24 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:46.924 16:24:24 -- setup/hugepages.sh@92 -- # local surp
00:04:46.924 16:24:24 -- setup/hugepages.sh@93 -- # local resv
00:04:46.924 16:24:24 -- setup/hugepages.sh@94 -- # local anon
00:04:46.924 16:24:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:46.924 16:24:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:46.924 16:24:24 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:46.924 16:24:24 -- setup/common.sh@18 -- # local node=
00:04:46.924 16:24:24 -- setup/common.sh@19 -- # local var val
00:04:46.924 16:24:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.924 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.924 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.924 16:24:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.924 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.924 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.924 16:24:24 -- setup/common.sh@31 -- # IFS=': '
00:04:46.924 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6507948 kB' 'MemAvailable: 9437384 kB' 'Buffers: 2684 kB' 'Cached: 3130072 kB' 'SwapCached: 0 kB' 'Active: 495208 kB' 'Inactive: 2753116 kB' 'Active(anon): 126056 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117140 kB' 'Mapped: 50068 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190692 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102464 kB' 'KernelStack: 6584 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:46.924 16:24:24 -- setup/common.sh@31 -- # read -r var val _
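Before the verification above, get_test_nr_hugepages turned the requested size into a page count: the 2097152 at hugepages.sh@49 appears to be kB (2 GiB), and with Hugepagesize 2048 kB that yields the nr_hugepages=1024 seen at hugepages.sh@57, all assigned to the single user node 0. A sketch of that arithmetic (unit interpretation inferred from the numbers in this log, not stated by it):

    size_kb=2097152                                  # requested pool, 2 GiB in kB (assumed unit)
    default_hugepages=$(get_meminfo Hugepagesize)    # 2048 kB on this VM
    (( size_kb >= default_hugepages )) || exit 1
    nr_hugepages=$(( size_kb / default_hugepages ))  # 2097152 / 2048 = 1024
    user_nodes=('0')
    declare -A nodes_test=()
    for n in "${user_nodes[@]}"; do
        nodes_test[$n]=$nr_hugepages                 # node 0 gets all 1024 pages
    done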
[xtrace elided: read/continue scan over the fields above until AnonHugePages matches]
00:04:46.925 16:24:24 -- setup/common.sh@33 -- # echo 0
00:04:46.925 16:24:24 -- setup/common.sh@33 -- # return 0
00:04:46.925 16:24:24 -- setup/hugepages.sh@97 -- # anon=0
00:04:46.925 16:24:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.925 16:24:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.925 16:24:24 -- setup/common.sh@18 -- # local node=
00:04:46.925 16:24:24 -- setup/common.sh@19 -- # local var val
00:04:46.925 16:24:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.925 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.925 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.925 16:24:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.925 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.925 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.925 16:24:24 -- setup/common.sh@31 -- # IFS=': '
00:04:46.925 16:24:24 -- setup/common.sh@31 -- # read -r var val _
00:04:46.925 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6507948 kB' 'MemAvailable: 9437384 kB' 'Buffers: 2684 kB' 'Cached: 3130072 kB' 'SwapCached: 0 kB' 'Active: 494976 kB' 'Inactive: 2753116 kB' 'Active(anon): 125824 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116960 kB' 'Mapped: 50068 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190692 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102464 kB' 'KernelStack: 6584 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
[xtrace continues: read/continue scan for HugePages_Surp]
-- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- 
setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.926 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.926 16:24:24 -- setup/common.sh@33 -- # echo 0 00:04:46.926 16:24:24 -- setup/common.sh@33 -- # return 0 00:04:46.926 16:24:24 -- setup/hugepages.sh@99 -- # surp=0 00:04:46.926 16:24:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.926 16:24:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.926 16:24:24 -- setup/common.sh@18 -- # local node= 00:04:46.926 16:24:24 -- setup/common.sh@19 -- # local var val 00:04:46.926 16:24:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.926 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.926 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.926 16:24:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.926 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.926 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.927 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508200 kB' 'MemAvailable: 9437636 kB' 'Buffers: 2684 kB' 'Cached: 3130072 kB' 'SwapCached: 0 kB' 'Active: 495148 kB' 'Inactive: 2753116 kB' 'Active(anon): 125996 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117124 kB' 'Mapped: 49940 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190696 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102468 kB' 'KernelStack: 6608 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:04:46.927 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.927 16:24:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.927 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.927 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.927 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.927 16:24:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.927 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.927 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.927 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.927 16:24:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.927 16:24:24 -- setup/common.sh@32 -- # continue 00:04:46.927 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.927 16:24:24 -- setup/common.sh@31 -- # read -r var val 
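What the trace above is doing, in brief: autotest's get_meminfo helper snapshots a meminfo file into an array and scans it line by line until the requested field matches; the backslash-escaped right-hand sides (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) are just how xtrace prints a pattern that must match literally. A minimal sketch of the same technique, assuming only standard bash; the function body is illustrative, not SPDK's exact source:

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    get_meminfo() { # usage: get_meminfo <field> [numa-node]
        local get=$1 node=${2:-} var val _ mem
        local mem_f=/proc/meminfo
        # With a node argument, read the per-node sysfs copy instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node lines carry a "Node <n> " prefix; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Quoted right-hand side forces a literal (non-glob) match
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)   # 0 in the run above

Scanning the snapshot rather than grepping the live file keeps every field of one read of /proc/meminfo consistent, which matters when several counters are compared against each other afterwards.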
00:04:46.926 16:24:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.926 16:24:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.926 16:24:24 -- setup/common.sh@18 -- # local node=
00:04:46.926 16:24:24 -- setup/common.sh@19 -- # local var val
00:04:46.926 16:24:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.926 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.926 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.926 16:24:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.926 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.926 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.926 16:24:24 -- setup/common.sh@31 -- # IFS=': '
00:04:46.927 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508200 kB' 'MemAvailable: 9437636 kB' 'Buffers: 2684 kB' 'Cached: 3130072 kB' 'SwapCached: 0 kB' 'Active: 495148 kB' 'Inactive: 2753116 kB' 'Active(anon): 125996 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117124 kB' 'Mapped: 49940 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190696 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102468 kB' 'KernelStack: 6608 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:46.927 16:24:24 -- setup/common.sh@31 -- # read -r var val _
00:04:46.927 16:24:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.927 16:24:24 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue records for every other non-matching field ...]
00:04:46.928 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.928 16:24:24 -- setup/common.sh@33 -- # echo 0
00:04:46.928 16:24:24 -- setup/common.sh@33 -- # return 0
00:04:46.928 16:24:24 -- setup/hugepages.sh@100 -- # resv=0
00:04:46.928 16:24:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:46.928 nr_hugepages=1024
00:04:46.928 resv_hugepages=0
00:04:46.928 surplus_hugepages=0
00:04:46.928 anon_hugepages=0
00:04:46.928 16:24:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:46.928 16:24:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:46.928 16:24:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:46.928 16:24:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:46.928 16:24:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:46.928 16:24:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
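With anon, surp and resv in hand, hugepages.sh@107-110 reduce to plain arithmetic assertions over the meminfo counters. A compact sketch of that bookkeeping, assuming the get_meminfo helper sketched earlier; the trace only shows the already-expanded literal 1024 on the left of each comparison, so which counter feeds it, and the exit-on-failure style, are assumptions here:

    nr_hugepages=1024                    # requested earlier in the job
    surp=$(get_meminfo HugePages_Surp)   # 0 in the trace above
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in the trace above
    total=$(get_meminfo HugePages_Total) # 1024 in the trace above

    # The pool is consistent only if the kernel's page count equals the
    # configured count once surplus and reserved pages are folded in.
    (( total == nr_hugepages + surp + resv )) || { echo "hugepage pool inconsistent" >&2; exit 1; }
    (( total == nr_hugepages )) || { echo "unexpected surplus/reserved pages" >&2; exit 1; }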
00:04:46.928 16:24:24 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:46.928 16:24:24 -- setup/common.sh@18 -- # local node=
00:04:46.928 16:24:24 -- setup/common.sh@19 -- # local var val
00:04:46.928 16:24:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.928 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.928 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.928 16:24:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.928 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.928 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.928 16:24:24 -- setup/common.sh@31 -- # IFS=': '
00:04:46.928 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508200 kB' 'MemAvailable: 9437636 kB' 'Buffers: 2684 kB' 'Cached: 3130072 kB' 'SwapCached: 0 kB' 'Active: 495152 kB' 'Inactive: 2753116 kB' 'Active(anon): 126000 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117128 kB' 'Mapped: 49940 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190696 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102468 kB' 'KernelStack: 6608 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:46.928 16:24:24 -- setup/common.sh@31 -- # read -r var val _
00:04:46.928 16:24:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.928 16:24:24 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue records for every other non-matching field ...]
00:04:46.929 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.929 16:24:24 -- setup/common.sh@33 -- # echo 1024
00:04:46.929 16:24:24 -- setup/common.sh@33 -- # return 0
00:04:46.929 16:24:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:46.929 16:24:24 -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.929 16:24:24 -- setup/hugepages.sh@27 -- # local node
00:04:46.929 16:24:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.930 16:24:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:46.930 16:24:24 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:46.930 16:24:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.930 16:24:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.930 16:24:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
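The global totals now check out, so the next get_meminfo call below passes node 0, and common.sh switches its source file to /sys/devices/system/node/node0/meminfo; the "Node <n> " prefix strip at common.sh@29 is what lets the same scan loop handle both files. A sketch of the per-node tally being built here, loosely mirroring the nodes_sys/nodes_test arrays in the trace; the exact aggregation is an assumption, and the get_meminfo sketch from earlier is reused:

    shopt -s extglob nullglob  # node+([0-9]) glob; skip loop if no match
    declare -a nodes_sys nodes_test

    # One entry per NUMA node, recording how many hugepages it should hold
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

    # Fold per-node surplus pages into the count we expect to observe
    for node in "${!nodes_sys[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_test[node] = nodes_sys[node] + surp ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done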
00:04:46.930 16:24:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.930 16:24:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.930 16:24:24 -- setup/common.sh@18 -- # local node=0
00:04:46.930 16:24:24 -- setup/common.sh@19 -- # local var val
00:04:46.930 16:24:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:46.930 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.930 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.930 16:24:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.930 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.930 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.930 16:24:24 -- setup/common.sh@31 -- # IFS=': '
00:04:46.930 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6508200 kB' 'MemUsed: 5730912 kB' 'SwapCached: 0 kB' 'Active: 495324 kB' 'Inactive: 2753116 kB' 'Active(anon): 126172 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132756 kB' 'Mapped: 49940 kB' 'AnonPages: 117252 kB' 'Shmem: 10488 kB' 'KernelStack: 6592 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 190696 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:46.930 16:24:24 -- setup/common.sh@31 -- # read -r var val _
00:04:46.930 16:24:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.930 16:24:24 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue records for every other non-matching node0 field ...]
00:04:46.931 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.931 16:24:24 -- setup/common.sh@33 -- # echo 0
00:04:46.931 16:24:24 -- setup/common.sh@33 -- # return 0
00:04:46.931 16:24:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:46.931 16:24:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:46.931 node0=1024 expecting 1024
00:04:46.931 16:24:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:46.931 16:24:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:46.931 16:24:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:46.931 16:24:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:46.931 16:24:24 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:46.931 16:24:24 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:46.931 16:24:24 -- setup/hugepages.sh@202 -- # setup output
00:04:46.931 16:24:24 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:46.931 16:24:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:47.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:47.500 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:47.501 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:47.501 INFO: Requested 512 hugepages but 1024 already allocated on node0
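The INFO line is scripts/setup.sh declining to shrink an existing pool: 512 pages were requested via NRHUGE, node0 already holds 1024, and with CLEAR_HUGE=no the larger allocation is left in place. A sketch of that guard against the standard sysfs nr_hugepages knob; the message text matches the log, but the surrounding logic is illustrative, not the script's exact source:

    NRHUGE=${NRHUGE:-512}
    CLEAR_HUGE=${CLEAR_HUGE:-no}
    node=0
    nr_f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    allocated=$(<"$nr_f")

    if [[ $CLEAR_HUGE == yes ]]; then
        echo 0 > "$nr_f"   # start from an empty pool before reallocating
        allocated=0
    fi
    if (( allocated >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node$node"
    else
        echo "$NRHUGE" > "$nr_f"
    fi

Leaving the larger pool intact is why the re-run of verify_nr_hugepages below still expects 1024 pages rather than 512.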
'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.501 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.501 16:24:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.501 16:24:24 -- 
00:04:47.501 [xtrace condensed: get_meminfo "continue"s past every /proc/meminfo key that is not AnonHugePages, MemTotal through HardwareCorrupted (common.sh@31-@32)]
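[annotation] Every "continue" run condensed in this log comes from a single key-matching loop in setup/common.sh's get_meminfo. The following is a minimal sketch of that helper, reconstructed from the traced statements (common.sh@16-@33); details the trace does not show, such as the exact redirection, are assumptions, not the verbatim SPDK source:

    shopt -s extglob                       # needed for the "Node N " prefix strip below
    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo or a node's meminfo
    get_meminfo() {
        local get=$1 node=$2 var val mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # common.sh@24, per-node case
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do   # "MemTotal: 12239112 kB" -> var=MemTotal val=12239112, "kB" lands in $_
            [[ $var == "$get" ]] || continue   # the repeated "continue"s seen in the trace
            echo "$val"                        # e.g. "echo 0" / "echo 1024" above and below
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }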
00:04:47.502 16:24:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.502 16:24:24 -- setup/common.sh@33 -- # echo 0
00:04:47.502 16:24:24 -- setup/common.sh@33 -- # return 0
00:04:47.502 16:24:24 -- setup/hugepages.sh@97 -- # anon=0
00:04:47.502 16:24:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:47.502 16:24:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.502 16:24:24 -- setup/common.sh@18 -- # local node=
00:04:47.502 16:24:24 -- setup/common.sh@19 -- # local var val
00:04:47.502 16:24:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.502 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.502 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.502 16:24:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.502 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.502 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.502 16:24:24 -- setup/common.sh@31 -- # IFS=': '
00:04:47.502 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6511348 kB' 'MemAvailable: 9440784 kB' 'Buffers: 2684 kB' 'Cached: 3130072 kB' 'SwapCached: 0 kB' 'Active: 495136 kB' 'Inactive: 2753116 kB' 'Active(anon): 125984 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117120 kB' 'Mapped: 49940 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190692 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102464 kB' 'KernelStack: 6608 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55304 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:47.502 16:24:24 -- setup/common.sh@31 -- # read -r var val _
00:04:47.502 [xtrace condensed: get_meminfo "continue"s past every key that is not HugePages_Surp, MemTotal through HugePages_Rsvd (common.sh@31-@32)]
00:04:47.503 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.503 16:24:24 -- setup/common.sh@33 -- # echo 0
00:04:47.503 16:24:24 -- setup/common.sh@33 -- # return 0
00:04:47.503 16:24:24 -- setup/hugepages.sh@99 -- # surp=0
00:04:47.503 16:24:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:47.503 16:24:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:47.503 16:24:24 -- setup/common.sh@18 -- # local node=
00:04:47.503 16:24:24 -- setup/common.sh@19 -- # local var val
00:04:47.503 16:24:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.504 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.504 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.504 16:24:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.504 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.504 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.504 16:24:24 -- setup/common.sh@31 -- # IFS=': '
00:04:47.504 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6511348 kB' 'MemAvailable: 9440784 kB' 'Buffers: 2684 kB' 'Cached: 3130072 kB' 'SwapCached: 0 kB' 'Active: 494884 kB' 'Inactive: 2753116 kB' 'Active(anon): 125732 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117164 kB' 'Mapped: 49940 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190692 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102464 kB' 'KernelStack: 6608 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:47.504 16:24:24 -- setup/common.sh@31 -- # read -r var val _
00:04:47.504 [xtrace condensed: get_meminfo "continue"s past every key that is not HugePages_Rsvd, MemTotal through HugePages_Free (common.sh@31-@32)]
00:04:47.505 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.505 16:24:24 -- setup/common.sh@33 -- # echo 0
00:04:47.505 16:24:24 -- setup/common.sh@33 -- # return 0
00:04:47.505 nr_hugepages=1024
00:04:47.505 resv_hugepages=0
00:04:47.505 surplus_hugepages=0
00:04:47.505 anon_hugepages=0
00:04:47.505 16:24:24 -- setup/hugepages.sh@100 -- # resv=0
00:04:47.505 16:24:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:47.505 16:24:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:47.505 16:24:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:47.505 16:24:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:47.505 16:24:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:47.505 16:24:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
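[annotation] The probes above (anon, surp, resv) feed the consistency checks traced at hugepages.sh@107-@110: the pool the test configured must match what the kernel reports once surplus and reserved pages are counted. A sketch of that arithmetic with this run's values; the variable names come from the trace, the error handling is an assumption (inside verify_nr_hugepages):

    nr_hugepages=1024          # configured pool size
    surp=0 resv=0 anon=0       # HugePages_Surp, HugePages_Rsvd, AnonHugePages probed above
    # kernel total must equal configured size plus surplus and reserved:
    # 1024 == 1024 + 0 + 0 on this run, so the check passes
    if (( $(get_meminfo HugePages_Total) != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch" >&2
        return 1
    fi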
00:04:47.505 16:24:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:47.505 16:24:24 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:47.505 16:24:24 -- setup/common.sh@18 -- # local node=
00:04:47.505 16:24:24 -- setup/common.sh@19 -- # local var val
00:04:47.505 16:24:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:47.505 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.505 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.505 16:24:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.505 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.505 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.505 16:24:24 -- setup/common.sh@31 -- # IFS=': '
00:04:47.505 16:24:24 -- setup/common.sh@31 -- # read -r var val _
00:04:47.505 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6511348 kB' 'MemAvailable: 9440784 kB' 'Buffers: 2684 kB' 'Cached: 3130072 kB' 'SwapCached: 0 kB' 'Active: 495268 kB' 'Inactive: 2753116 kB' 'Active(anon): 126116 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117200 kB' 'Mapped: 49940 kB' 'Shmem: 10488 kB' 'KReclaimable: 88228 kB' 'Slab: 190692 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102464 kB' 'KernelStack: 6592 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55288 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 206700 kB' 'DirectMap2M: 6084608 kB' 'DirectMap1G: 8388608 kB'
00:04:47.506 [xtrace condensed: get_meminfo "continue"s past every key that is not HugePages_Total, MemTotal through Unaccepted (common.sh@31-@32)]
setup/common.sh@32 -- # continue 00:04:47.506 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.506 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.506 16:24:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.506 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.506 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.506 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.506 16:24:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.506 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.506 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.506 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.506 16:24:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.506 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.506 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.506 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.507 16:24:24 -- setup/common.sh@33 -- # echo 1024 00:04:47.507 16:24:24 -- setup/common.sh@33 -- # return 0 00:04:47.507 16:24:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.507 16:24:24 -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.507 16:24:24 -- setup/hugepages.sh@27 -- # local node 00:04:47.507 16:24:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.507 16:24:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.507 16:24:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.507 16:24:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.507 16:24:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.507 16:24:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.507 16:24:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.507 16:24:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.507 16:24:24 -- setup/common.sh@18 -- # local node=0 00:04:47.507 16:24:24 -- setup/common.sh@19 -- # local var val 00:04:47.507 16:24:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:47.507 16:24:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.507 16:24:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.507 16:24:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.507 16:24:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.507 16:24:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6511348 kB' 'MemUsed: 5727764 kB' 'SwapCached: 0 kB' 'Active: 495328 kB' 'Inactive: 2753116 kB' 'Active(anon): 126176 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2753116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3132756 kB' 'Mapped: 49940 kB' 'AnonPages: 117236 kB' 'Shmem: 10488 kB' 
'KernelStack: 6624 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88228 kB' 'Slab: 190692 kB' 'SReclaimable: 88228 kB' 'SUnreclaim: 102464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.507 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.507 16:24:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.766 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.766 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.766 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.766 16:24:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.766 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.766 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.766 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.766 16:24:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # continue 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:47.767 16:24:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:47.767 16:24:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.767 
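What the trimmed xtrace above is doing: setup/common.sh's get_meminfo walks a meminfo file one "Field: value" pair at a time and discards every line until the requested field turns up; for a per-node query it first switches to /sys/devices/system/node/nodeN/meminfo and strips the "Node <n> " prefix so the field names line up with /proc/meminfo. A minimal bash sketch of that pattern (the function body is a reconstruction for illustration, not the script's literal source); the echo 0 below is the real scan's result for HugePages_Surp:

# Sketch: return the value of one meminfo field, optionally for a NUMA node.
get_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    # Per-node counters live under /sys; each line there is prefixed with
    # "Node <n> ", which sed strips so field names match /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # found it
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
get_meminfo HugePages_Total     # prints 1024 on this runner
get_meminfo HugePages_Surp 0    # prints 0 for node 0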
16:24:24 -- setup/common.sh@33 -- # echo 0 00:04:47.767 16:24:24 -- setup/common.sh@33 -- # return 0 00:04:47.767 16:24:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.767 16:24:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.767 16:24:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.767 node0=1024 expecting 1024 00:04:47.767 16:24:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.767 16:24:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:47.767 16:24:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:47.767 00:04:47.767 real 0m1.266s 00:04:47.767 user 0m0.561s 00:04:47.767 sys 0m0.711s 00:04:47.767 ************************************ 00:04:47.767 END TEST no_shrink_alloc 00:04:47.767 ************************************ 00:04:47.767 16:24:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.767 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:04:47.767 16:24:25 -- setup/hugepages.sh@217 -- # clear_hp 00:04:47.767 16:24:25 -- setup/hugepages.sh@37 -- # local node hp 00:04:47.767 16:24:25 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:47.767 16:24:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.767 16:24:25 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.767 16:24:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:47.767 16:24:25 -- setup/hugepages.sh@41 -- # echo 0 00:04:47.767 16:24:25 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:47.767 16:24:25 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:47.767 ************************************ 00:04:47.767 END TEST hugepages 00:04:47.767 ************************************ 00:04:47.767 00:04:47.767 real 0m5.514s 00:04:47.767 user 0m2.474s 00:04:47.767 sys 0m2.936s 00:04:47.767 16:24:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:47.767 16:24:25 -- common/autotest_common.sh@10 -- # set +x 00:04:47.767 16:24:25 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:47.767 16:24:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.767 16:24:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.767 16:24:25 -- common/autotest_common.sh@10 -- # set +x 00:04:47.767 ************************************ 00:04:47.767 START TEST driver 00:04:47.767 ************************************ 00:04:47.767 16:24:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:47.767 * Looking for test storage... 
00:04:47.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:47.767 16:24:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:47.767 16:24:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:47.767 16:24:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:48.026 16:24:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:48.026 16:24:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:48.026 16:24:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:48.026 16:24:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:48.026 16:24:25 -- scripts/common.sh@335 -- # IFS=.-: 00:04:48.026 16:24:25 -- scripts/common.sh@335 -- # read -ra ver1 00:04:48.026 16:24:25 -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.026 16:24:25 -- scripts/common.sh@336 -- # read -ra ver2 00:04:48.026 16:24:25 -- scripts/common.sh@337 -- # local 'op=<' 00:04:48.026 16:24:25 -- scripts/common.sh@339 -- # ver1_l=2 00:04:48.026 16:24:25 -- scripts/common.sh@340 -- # ver2_l=1 00:04:48.026 16:24:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:48.026 16:24:25 -- scripts/common.sh@343 -- # case "$op" in 00:04:48.026 16:24:25 -- scripts/common.sh@344 -- # : 1 00:04:48.026 16:24:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:48.026 16:24:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.026 16:24:25 -- scripts/common.sh@364 -- # decimal 1 00:04:48.026 16:24:25 -- scripts/common.sh@352 -- # local d=1 00:04:48.026 16:24:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.026 16:24:25 -- scripts/common.sh@354 -- # echo 1 00:04:48.026 16:24:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:48.026 16:24:25 -- scripts/common.sh@365 -- # decimal 2 00:04:48.026 16:24:25 -- scripts/common.sh@352 -- # local d=2 00:04:48.026 16:24:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.026 16:24:25 -- scripts/common.sh@354 -- # echo 2 00:04:48.026 16:24:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:48.026 16:24:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:48.026 16:24:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:48.026 16:24:25 -- scripts/common.sh@367 -- # return 0 00:04:48.026 16:24:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.026 16:24:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:48.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.026 --rc genhtml_branch_coverage=1 00:04:48.026 --rc genhtml_function_coverage=1 00:04:48.026 --rc genhtml_legend=1 00:04:48.026 --rc geninfo_all_blocks=1 00:04:48.026 --rc geninfo_unexecuted_blocks=1 00:04:48.026 00:04:48.026 ' 00:04:48.026 16:24:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:48.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.026 --rc genhtml_branch_coverage=1 00:04:48.026 --rc genhtml_function_coverage=1 00:04:48.026 --rc genhtml_legend=1 00:04:48.026 --rc geninfo_all_blocks=1 00:04:48.026 --rc geninfo_unexecuted_blocks=1 00:04:48.026 00:04:48.026 ' 00:04:48.026 16:24:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:48.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.026 --rc genhtml_branch_coverage=1 00:04:48.026 --rc genhtml_function_coverage=1 00:04:48.026 --rc genhtml_legend=1 00:04:48.026 --rc geninfo_all_blocks=1 00:04:48.026 --rc geninfo_unexecuted_blocks=1 00:04:48.026 00:04:48.026 ' 00:04:48.026 16:24:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:48.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.026 --rc genhtml_branch_coverage=1 00:04:48.026 --rc genhtml_function_coverage=1 00:04:48.026 --rc genhtml_legend=1 00:04:48.026 --rc geninfo_all_blocks=1 00:04:48.026 --rc geninfo_unexecuted_blocks=1 00:04:48.026 00:04:48.026 ' 00:04:48.026 16:24:25 -- setup/driver.sh@68 -- # setup reset 00:04:48.026 16:24:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.026 16:24:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.595 16:24:25 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:48.595 16:24:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.595 16:24:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.595 16:24:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.595 ************************************ 00:04:48.595 START TEST guess_driver 00:04:48.595 ************************************ 00:04:48.595 16:24:25 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:48.595 16:24:25 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:48.595 16:24:25 -- setup/driver.sh@47 -- # local fail=0 00:04:48.595 16:24:25 -- setup/driver.sh@49 -- # pick_driver 00:04:48.595 16:24:25 -- setup/driver.sh@36 -- # vfio 00:04:48.595 16:24:25 -- setup/driver.sh@21 -- # local iommu_grups 00:04:48.595 16:24:25 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:48.595 16:24:25 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:48.595 16:24:25 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:48.595 16:24:25 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:48.595 16:24:25 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:48.595 16:24:25 -- setup/driver.sh@32 -- # return 1 00:04:48.595 16:24:25 -- setup/driver.sh@38 -- # uio 00:04:48.595 16:24:25 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:48.595 16:24:25 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:48.595 16:24:25 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:48.595 16:24:25 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:48.595 16:24:25 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:48.595 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:48.595 16:24:25 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:48.595 Looking for driver=uio_pci_generic 00:04:48.595 16:24:25 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:48.595 16:24:25 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:48.595 16:24:25 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:48.595 16:24:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.595 16:24:25 -- setup/driver.sh@45 -- # setup output config 00:04:48.595 16:24:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.595 16:24:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.534 16:24:26 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:49.534 16:24:26 -- setup/driver.sh@58 -- # continue 00:04:49.534 16:24:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.534 16:24:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.534 16:24:26 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:49.534 16:24:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.534 16:24:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.534 16:24:26 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:49.534 16:24:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.534 16:24:26 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:49.534 16:24:26 -- setup/driver.sh@65 -- # setup reset 00:04:49.534 16:24:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.535 16:24:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.102 00:04:50.102 real 0m1.553s 00:04:50.102 user 0m0.628s 00:04:50.102 sys 0m0.937s 00:04:50.102 16:24:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.102 16:24:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.102 ************************************ 00:04:50.102 END TEST guess_driver 00:04:50.102 ************************************ 00:04:50.102 ************************************ 00:04:50.102 END TEST driver 00:04:50.102 ************************************ 00:04:50.102 00:04:50.102 real 0m2.428s 00:04:50.102 user 0m0.983s 00:04:50.102 sys 0m1.526s 00:04:50.102 16:24:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:50.102 16:24:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.102 16:24:27 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:50.102 16:24:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.102 16:24:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.102 16:24:27 -- common/autotest_common.sh@10 -- # set +x 00:04:50.360 ************************************ 00:04:50.360 START TEST devices 00:04:50.360 ************************************ 00:04:50.360 16:24:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:50.360 * Looking for test storage... 00:04:50.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:50.360
[common/autotest_common.sh@1689-1704 xtrace trimmed: the identical lcov version probe and LCOV_OPTS/LCOV export already logged above for driver.sh runs again here for devices.sh, with the same output]
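A note on the guess_driver run that just finished: vfio was rejected because /sys/kernel/iommu_groups was empty (the (( 0 > 0 )) test) and unsafe no-IOMMU mode was off, so the script settled on uio_pci_generic after confirming via modprobe --show-depends that the module and its dependency chain resolve to real .ko files. A condensed bash sketch of that decision; the function is a reconstruction for illustration, not the script's literal source:

# Prefer vfio-pci when the IOMMU is usable, else fall back to uio_pci_generic.
pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio
    unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    if (( ${#iommu_groups[@]} > 0 )) && [[ -e ${iommu_groups[0]} ]] || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci && return
    fi
    # modprobe --show-depends prints "insmod /lib/modules/.../*.ko..." lines
    # only when the module and all of its dependencies actually exist.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}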
16:24:27 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:50.360 16:24:27 -- setup/devices.sh@192 -- # setup reset 00:04:50.361 16:24:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.361 16:24:27 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.294 16:24:28 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:51.294 16:24:28 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:51.294 16:24:28 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:51.294 16:24:28 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:51.294 16:24:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.294 16:24:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:51.294 16:24:28 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:51.294 16:24:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:51.294 16:24:28 -- common/autotest_common.sh@1660
-- # [[ none != none ]] 00:04:51.294 16:24:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.294 16:24:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:51.294 16:24:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:51.294 16:24:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:51.294 16:24:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:51.294 16:24:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.294 16:24:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:51.294 16:24:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:51.294 16:24:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:51.294 16:24:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:51.294 16:24:28 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:51.294 16:24:28 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:51.294 16:24:28 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:51.294 16:24:28 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:51.294 16:24:28 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:51.294 16:24:28 -- setup/devices.sh@196 -- # blocks=() 00:04:51.294 16:24:28 -- setup/devices.sh@196 -- # declare -a blocks 00:04:51.294 16:24:28 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:51.294 16:24:28 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:51.294 16:24:28 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:51.294 16:24:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.294 16:24:28 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:51.294 16:24:28 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:51.294 16:24:28 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:51.294 16:24:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:51.294 16:24:28 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:51.294 16:24:28 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:51.294 16:24:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:51.294 No valid GPT data, bailing 00:04:51.294 16:24:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:51.294 16:24:28 -- scripts/common.sh@393 -- # pt= 00:04:51.294 16:24:28 -- scripts/common.sh@394 -- # return 1 00:04:51.294 16:24:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:51.294 16:24:28 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:51.294 16:24:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:51.294 16:24:28 -- setup/common.sh@80 -- # echo 5368709120 00:04:51.294 16:24:28 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:51.294 16:24:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.294 16:24:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:51.294 16:24:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.294 16:24:28 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:51.294 16:24:28 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:51.294 16:24:28 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:51.294 16:24:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:51.294 16:24:28 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
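Annotation on the device filtering in progress here: every /sys/block/nvme* disk has to clear two gates before it is added to blocks. First, spdk-gpt.py and blkid -s PTTYPE must find no existing partition table (the repeated "No valid GPT data, bailing" is the pass case, and the sketch below folds both probes into the blkid check); second, the byte size derived from the disk's 512-byte sector count must reach min_disk_size (3221225472 bytes, so the 5368709120-byte nvme0n1 above and the 4294967296-byte nvme1n* disks checked next all qualify). A rough sketch with illustrative helper names; blkid generally needs root:

# Sketch: accept a disk only if it carries no partition table and is big enough.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the log

block_in_use() {
    # A non-empty PTTYPE from blkid means a GPT/MBR already lives on the disk.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$1") ]]
}

sec_size_to_bytes() {
    # /sys/block/<dev>/size counts 512-byte sectors regardless of LBA size.
    echo $(( $(cat "/sys/block/$1/size") * 512 ))
}

blocks=()
for dev in nvme0n1 nvme1n1 nvme1n2 nvme1n3; do
    block_in_use "$dev" && continue                        # skip busy disks
    (( $(sec_size_to_bytes "$dev") >= min_disk_size )) && blocks+=("$dev")
done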
00:04:51.294 16:24:28 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:51.294 16:24:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:51.294 No valid GPT data, bailing 00:04:51.294 16:24:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:51.294 16:24:28 -- scripts/common.sh@393 -- # pt= 00:04:51.294 16:24:28 -- scripts/common.sh@394 -- # return 1 00:04:51.294 16:24:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:51.294 16:24:28 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:51.294 16:24:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:51.294 16:24:28 -- setup/common.sh@80 -- # echo 4294967296 00:04:51.294 16:24:28 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:51.294 16:24:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.294 16:24:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:51.294 16:24:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.294 16:24:28 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:51.294 16:24:28 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:51.294 16:24:28 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:51.294 16:24:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:51.294 16:24:28 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:51.294 16:24:28 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:51.294 16:24:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:51.553 No valid GPT data, bailing 00:04:51.553 16:24:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:51.553 16:24:28 -- scripts/common.sh@393 -- # pt= 00:04:51.553 16:24:28 -- scripts/common.sh@394 -- # return 1 00:04:51.553 16:24:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:51.553 16:24:28 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:51.553 16:24:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:51.553 16:24:28 -- setup/common.sh@80 -- # echo 4294967296 00:04:51.553 16:24:28 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:51.553 16:24:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.553 16:24:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:51.553 16:24:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.553 16:24:28 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:51.553 16:24:28 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:51.553 16:24:28 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:51.553 16:24:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:51.553 16:24:28 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:51.553 16:24:28 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:51.553 16:24:28 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:51.553 No valid GPT data, bailing 00:04:51.553 16:24:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:51.553 16:24:28 -- scripts/common.sh@393 -- # pt= 00:04:51.553 16:24:28 -- scripts/common.sh@394 -- # return 1 00:04:51.553 16:24:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:51.553 16:24:28 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:51.554 16:24:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:51.554 16:24:28 -- setup/common.sh@80 -- # echo 4294967296 
00:04:51.554 16:24:28 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:51.554 16:24:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.554 16:24:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:51.554 16:24:28 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:51.554 16:24:28 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:51.554 16:24:28 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:51.554 16:24:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.554 16:24:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.554 16:24:28 -- common/autotest_common.sh@10 -- # set +x 00:04:51.554 ************************************ 00:04:51.554 START TEST nvme_mount 00:04:51.554 ************************************ 00:04:51.554 16:24:28 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:51.554 16:24:28 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:51.554 16:24:28 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:51.554 16:24:28 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.554 16:24:28 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:51.554 16:24:28 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:51.554 16:24:28 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:51.554 16:24:28 -- setup/common.sh@40 -- # local part_no=1 00:04:51.554 16:24:28 -- setup/common.sh@41 -- # local size=1073741824 00:04:51.554 16:24:28 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:51.554 16:24:28 -- setup/common.sh@44 -- # parts=() 00:04:51.554 16:24:28 -- setup/common.sh@44 -- # local parts 00:04:51.554 16:24:28 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:51.554 16:24:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.554 16:24:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:51.554 16:24:28 -- setup/common.sh@46 -- # (( part++ )) 00:04:51.554 16:24:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.554 16:24:28 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:51.554 16:24:28 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:51.554 16:24:28 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:52.490 Creating new GPT entries in memory. 00:04:52.490 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:52.490 other utilities. 00:04:52.490 16:24:29 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:52.490 16:24:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.490 16:24:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.490 16:24:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.490 16:24:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:53.863 Creating new GPT entries in memory. 00:04:53.863 The operation has completed successfully. 
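The "Creating new GPT entries in memory" / "The operation has completed successfully" pairs above are sgdisk output: partition_drive zaps the disk, lays out each 262144-sector partition back to back starting at sector 2048 (hence --new=1:2048:264191), and sync_dev_uevents.sh holds the test until the kernel's partition uevents land so /dev/nvme0n1p1 exists before mkfs runs. A sketch of the loop behind those calls, with udevadm settle standing in for the repo's uevent helper and illustrative variable names:

# Sketch: carve part_no equal partitions, serialised against udev probes.
disk=nvme0n1 part_no=1
size=$(( 1073741824 / 4096 ))      # 262144 sectors per partition, as logged
sgdisk "/dev/$disk" --zap-all      # wipe any existing GPT/MBR first

part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # flock keeps udev from re-reading the disk mid-rewrite.
    flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
done
udevadm settle                     # wait until /dev/${disk}p1 appears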
00:04:53.863 16:24:31 -- setup/common.sh@57 -- # (( part++ )) 00:04:53.863 16:24:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.863 16:24:31 -- setup/common.sh@62 -- # wait 65872 00:04:53.863 16:24:31 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.863 16:24:31 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:53.863 16:24:31 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.863 16:24:31 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:53.863 16:24:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:53.863 16:24:31 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.863 16:24:31 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.863 16:24:31 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:53.863 16:24:31 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:53.863 16:24:31 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.863 16:24:31 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.863 16:24:31 -- setup/devices.sh@53 -- # local found=0 00:04:53.863 16:24:31 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.863 16:24:31 -- setup/devices.sh@56 -- # : 00:04:53.863 16:24:31 -- setup/devices.sh@59 -- # local pci status 00:04:53.863 16:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.863 16:24:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:53.863 16:24:31 -- setup/devices.sh@47 -- # setup output config 00:04:53.863 16:24:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.863 16:24:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.863 16:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.863 16:24:31 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:53.863 16:24:31 -- setup/devices.sh@63 -- # found=1 00:04:53.863 16:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.863 16:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.863 16:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.431 16:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.431 16:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.431 16:24:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.431 16:24:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.431 16:24:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.431 16:24:31 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:54.431 16:24:31 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.431 16:24:31 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.431 16:24:31 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.431 16:24:31 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:54.431 16:24:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.431 16:24:31 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.431 16:24:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.431 16:24:31 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:54.431 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.431 16:24:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.431 16:24:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:54.690 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:54.690 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:54.690 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:54.690 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:54.690 16:24:32 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:54.690 16:24:32 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:54.690 16:24:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.690 16:24:32 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:54.690 16:24:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:54.690 16:24:32 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.690 16:24:32 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.690 16:24:32 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:54.691 16:24:32 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:54.691 16:24:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.691 16:24:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.691 16:24:32 -- setup/devices.sh@53 -- # local found=0 00:04:54.691 16:24:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.691 16:24:32 -- setup/devices.sh@56 -- # : 00:04:54.691 16:24:32 -- setup/devices.sh@59 -- # local pci status 00:04:54.691 16:24:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.691 16:24:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:54.691 16:24:32 -- setup/devices.sh@47 -- # setup output config 00:04:54.691 16:24:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.691 16:24:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.949 16:24:32 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.949 16:24:32 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:54.949 16:24:32 -- setup/devices.sh@63 -- # found=1 00:04:54.949 16:24:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.950 16:24:32 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:54.950 
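A note on the wipefs output in the cleanup above: "2 bytes were erased at offset 0x00000438 (ext4): 53 ef" is the ext4 superblock magic (0xEF53) inside the partition, the "45 46 49 20 50 41 52 54" runs are the ASCII "EFI PART" signatures of the primary GPT header and its backup at the end of the disk, and "55 aa" at offset 0x1fe is the boot signature of the protective MBR. Erasing just those few magic bytes is enough for the kernel to treat the disk as blank; roughly:

# Sketch: wiping only the signatures blanks the disk without touching data.
wipefs --all /dev/nvme0n1p1     # drops the ext4 magic at offset 0x438
wipefs --all /dev/nvme0n1       # drops GPT ("EFI PART"), backup GPT, PMBR
blkid /dev/nvme0n1 || echo "no signatures left"   # blkid now finds nothing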
16:24:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.208 16:24:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.208 16:24:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.467 16:24:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.467 16:24:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.467 16:24:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.467 16:24:32 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:55.467 16:24:32 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.467 16:24:32 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.467 16:24:32 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.467 16:24:32 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.467 16:24:32 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:55.467 16:24:32 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:55.467 16:24:32 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:55.467 16:24:32 -- setup/devices.sh@50 -- # local mount_point= 00:04:55.467 16:24:32 -- setup/devices.sh@51 -- # local test_file= 00:04:55.467 16:24:32 -- setup/devices.sh@53 -- # local found=0 00:04:55.467 16:24:32 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:55.467 16:24:32 -- setup/devices.sh@59 -- # local pci status 00:04:55.467 16:24:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.467 16:24:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:55.467 16:24:32 -- setup/devices.sh@47 -- # setup output config 00:04:55.467 16:24:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.467 16:24:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.725 16:24:33 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.725 16:24:33 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:55.725 16:24:33 -- setup/devices.sh@63 -- # found=1 00:04:55.725 16:24:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.725 16:24:33 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.725 16:24:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.985 16:24:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:55.985 16:24:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.244 16:24:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:56.244 16:24:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.244 16:24:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.244 16:24:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.244 16:24:33 -- setup/devices.sh@68 -- # return 0 00:04:56.244 16:24:33 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:56.244 16:24:33 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.244 16:24:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.244 16:24:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.244 16:24:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.244 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:56.244 00:04:56.244 real 0m4.650s 00:04:56.244 user 0m1.055s 00:04:56.244 sys 0m1.271s 00:04:56.244 16:24:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.244 ************************************ 00:04:56.244 END TEST nvme_mount 00:04:56.244 ************************************ 00:04:56.244 16:24:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.244 16:24:33 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:56.244 16:24:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.244 16:24:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.244 16:24:33 -- common/autotest_common.sh@10 -- # set +x 00:04:56.244 ************************************ 00:04:56.244 START TEST dm_mount 00:04:56.244 ************************************ 00:04:56.244 16:24:33 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:56.244 16:24:33 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:56.244 16:24:33 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:56.244 16:24:33 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:56.244 16:24:33 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:56.244 16:24:33 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:56.244 16:24:33 -- setup/common.sh@40 -- # local part_no=2 00:04:56.244 16:24:33 -- setup/common.sh@41 -- # local size=1073741824 00:04:56.244 16:24:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.244 16:24:33 -- setup/common.sh@44 -- # parts=() 00:04:56.245 16:24:33 -- setup/common.sh@44 -- # local parts 00:04:56.245 16:24:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.245 16:24:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.245 16:24:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.245 16:24:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.245 16:24:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.245 16:24:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.245 16:24:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.245 16:24:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.245 16:24:33 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:56.245 16:24:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:56.245 16:24:33 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:57.184 Creating new GPT entries in memory. 00:04:57.184 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.184 other utilities. 00:04:57.184 16:24:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.184 16:24:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.184 16:24:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.184 16:24:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.184 16:24:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:58.561 Creating new GPT entries in memory. 00:04:58.561 The operation has completed successfully. 00:04:58.561 16:24:35 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.561 16:24:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.561 16:24:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:58.561 16:24:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.561 16:24:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:59.498 The operation has completed successfully. 00:04:59.498 16:24:36 -- setup/common.sh@57 -- # (( part++ )) 00:04:59.498 16:24:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.498 16:24:36 -- setup/common.sh@62 -- # wait 66331 00:04:59.498 16:24:36 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:59.498 16:24:36 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.498 16:24:36 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.498 16:24:36 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:59.498 16:24:36 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:59.498 16:24:36 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.498 16:24:36 -- setup/devices.sh@161 -- # break 00:04:59.498 16:24:36 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.498 16:24:36 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:59.498 16:24:36 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:59.498 16:24:36 -- setup/devices.sh@166 -- # dm=dm-0 00:04:59.498 16:24:36 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:59.498 16:24:36 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:59.498 16:24:36 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.498 16:24:36 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:59.498 16:24:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.498 16:24:36 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.498 16:24:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:59.498 16:24:36 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.498 16:24:36 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.498 16:24:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:59.498 16:24:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:59.498 16:24:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.498 16:24:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.498 16:24:36 -- setup/devices.sh@53 -- # local found=0 00:04:59.498 16:24:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.498 16:24:36 -- setup/devices.sh@56 -- # : 00:04:59.498 16:24:36 -- setup/devices.sh@59 -- # local pci status 00:04:59.498 16:24:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.498 16:24:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:59.498 16:24:36 -- setup/devices.sh@47 -- # setup output config 00:04:59.498 16:24:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.498 16:24:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.757 16:24:37 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.757 16:24:37 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:59.757 16:24:37 -- setup/devices.sh@63 -- # found=1 00:04:59.757 16:24:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.757 16:24:37 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.757 16:24:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.016 16:24:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.016 16:24:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.016 16:24:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.016 16:24:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.275 16:24:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.275 16:24:37 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:00.275 16:24:37 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.275 16:24:37 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.275 16:24:37 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.275 16:24:37 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.275 16:24:37 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:00.275 16:24:37 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:00.275 16:24:37 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:00.275 16:24:37 -- setup/devices.sh@50 -- # local mount_point= 00:05:00.275 16:24:37 -- setup/devices.sh@51 -- # local test_file= 00:05:00.275 16:24:37 -- setup/devices.sh@53 -- # local found=0 00:05:00.275 16:24:37 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.275 16:24:37 -- setup/devices.sh@59 -- # local pci status 00:05:00.275 16:24:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.275 16:24:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:00.275 16:24:37 -- setup/devices.sh@47 -- # setup output config 00:05:00.275 16:24:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.275 16:24:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.275 16:24:37 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.275 16:24:37 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:00.275 16:24:37 -- setup/devices.sh@63 -- # found=1 00:05:00.275 16:24:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.275 16:24:37 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.275 16:24:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.843 16:24:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.843 16:24:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.843 16:24:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.843 16:24:38 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.843 16:24:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.843 16:24:38 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.843 16:24:38 -- setup/devices.sh@68 -- # return 0 00:05:00.843 16:24:38 -- setup/devices.sh@187 -- # cleanup_dm 00:05:00.843 16:24:38 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.843 16:24:38 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.843 16:24:38 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:00.843 16:24:38 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.843 16:24:38 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:00.843 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.843 16:24:38 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.843 16:24:38 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:00.843 00:05:00.843 real 0m4.652s 00:05:00.843 user 0m0.700s 00:05:00.843 sys 0m0.873s 00:05:00.843 16:24:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.843 16:24:38 -- common/autotest_common.sh@10 -- # set +x 00:05:00.843 ************************************ 00:05:00.843 END TEST dm_mount 00:05:00.843 ************************************ 00:05:00.843 16:24:38 -- setup/devices.sh@1 -- # cleanup 00:05:00.843 16:24:38 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:00.843 16:24:38 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.102 16:24:38 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.102 16:24:38 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.102 16:24:38 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.102 16:24:38 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.361 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.361 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.361 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.361 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.361 16:24:38 -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.361 16:24:38 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.361 16:24:38 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.361 16:24:38 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.361 16:24:38 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.361 16:24:38 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.361 16:24:38 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.361 00:05:01.361 real 0m11.037s 00:05:01.361 user 0m2.537s 00:05:01.361 sys 0m2.803s 00:05:01.361 16:24:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.361 16:24:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.361 ************************************ 00:05:01.361 END TEST devices 00:05:01.361 ************************************ 00:05:01.361 ************************************ 00:05:01.361 END TEST setup.sh 00:05:01.361 ************************************ 00:05:01.361 00:05:01.361 real 0m24.096s 00:05:01.361 user 0m8.223s 00:05:01.361 sys 0m10.157s 00:05:01.361 16:24:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.361 16:24:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.361 16:24:38 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.620 Hugepages 00:05:01.620 node hugesize free / total 00:05:01.620 node0 1048576kB 0 / 0 00:05:01.620 node0 2048kB 2048 / 2048 00:05:01.620 00:05:01.620 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.620 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:01.620 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:01.878 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:01.878 16:24:39 -- spdk/autotest.sh@128 -- # uname -s 00:05:01.878 16:24:39 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:01.878 16:24:39 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:01.878 16:24:39 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.445 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.704 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.704 16:24:40 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:03.640 16:24:41 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:03.640 16:24:41 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:03.640 16:24:41 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.640 16:24:41 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:03.640 16:24:41 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:03.640 16:24:41 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:03.640 16:24:41 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.640 16:24:41 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.640 16:24:41 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:03.640 16:24:41 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:03.640 16:24:41 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:03.640 16:24:41 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.207 Waiting for block devices as requested 00:05:04.207 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.207 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.466 16:24:41 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:04.466 16:24:41 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:04.466 16:24:41 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.466 16:24:41 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:04.466 16:24:41 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:04.466 16:24:41 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:04.466 16:24:41 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:04.466 16:24:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:04.466 16:24:41 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:04.466 16:24:41 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:04.466 16:24:41 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:04.466 16:24:41 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:04.466 16:24:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.466 16:24:41 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:04.466 16:24:41 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:04.466 16:24:41 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:04.466 16:24:41 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:04.466 16:24:41 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:04.466 16:24:41 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:04.466 16:24:41 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:04.466 16:24:41 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:04.466 16:24:41 -- common/autotest_common.sh@1552 -- # continue 00:05:04.466 16:24:41 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:04.466 16:24:41 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:04.466 16:24:41 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.466 16:24:41 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:04.466 16:24:41 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:04.466 16:24:41 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:04.466 16:24:41 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:04.466 16:24:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:04.466 16:24:41 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:04.466 16:24:41 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:04.466 16:24:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:04.466 16:24:41 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:04.466 16:24:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.466 16:24:41 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:04.466 16:24:41 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:04.466 16:24:41 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:04.466 16:24:41 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:04.466 16:24:41 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:04.466 16:24:41 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:04.466 16:24:41 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:04.466 16:24:41 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:04.466 16:24:41 -- common/autotest_common.sh@1552 -- # continue 00:05:04.466 16:24:41 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:04.466 16:24:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:04.466 16:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.466 16:24:41 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:04.466 16:24:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.466 16:24:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.466 16:24:41 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.404 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.404 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:05.404 16:24:42 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:05.404 16:24:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.404 16:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.404 16:24:42 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:05.404 16:24:42 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:05.404 16:24:42 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.404 16:24:42 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:05.404 16:24:42 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:05.404 16:24:42 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:05.404 16:24:42 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:05.404 16:24:42 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:05.404 16:24:42 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.404 16:24:42 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.404 16:24:42 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:05.663 16:24:42 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:05.663 16:24:42 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:05.663 16:24:42 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.663 16:24:42 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:05.663 16:24:42 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.663 16:24:42 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.663 16:24:42 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:05.663 16:24:42 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:05.663 16:24:42 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:05.663 16:24:42 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.663 16:24:42 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:05.663 16:24:42 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:05.663 16:24:42 -- common/autotest_common.sh@1588 -- # return 0 00:05:05.663 16:24:42 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:05.663 16:24:42 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:05.663 16:24:42 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.663 16:24:42 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.663 16:24:42 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:05.663 16:24:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.663 16:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.663 16:24:42 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.664 16:24:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.664 16:24:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.664 16:24:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.664 ************************************ 00:05:05.664 START TEST env 00:05:05.664 ************************************ 00:05:05.664 16:24:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.664 * Looking for test storage... 
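The pre-test cleanup above interrogates each controller twice: nvme id-ctrl output is parsed for oacs (bit 3 set means the controller supports namespace management) and for unvmcap, and opal_revert_cleanup then compares each controller's PCI device ID against 0x0a54. Both emulated controllers here report device ID 0x0010, so nothing is reverted. A condensed sketch of those checks, assuming nvme-cli and the same device nodes as in the trace:

# Per-controller capability probe, as in the autotest_common.sh calls above.
oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
oacs_ns_manage=$(( oacs & 0x8 ))                                 # bit 3: namespace management
unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)  # unallocated NVM capacity
(( oacs_ns_manage != 0 )) && (( unvmcap == 0 )) && echo "nvme0: nothing to revert"

# OPAL-revert candidates are picked purely by PCI device ID; the BDF list is
# hard-coded here for illustration (the test derives it from gen_nvme.sh | jq).
for bdf in 0000:00:06.0 0000:00:07.0; do
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
done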
00:05:05.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.664 16:24:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.664 16:24:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.664 16:24:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.664 16:24:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.664 16:24:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.664 16:24:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.664 16:24:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.664 16:24:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.664 16:24:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.664 16:24:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.664 16:24:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.664 16:24:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.664 16:24:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.664 16:24:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.664 16:24:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.664 16:24:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.664 16:24:43 -- scripts/common.sh@344 -- # : 1 00:05:05.664 16:24:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.664 16:24:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.664 16:24:43 -- scripts/common.sh@364 -- # decimal 1 00:05:05.664 16:24:43 -- scripts/common.sh@352 -- # local d=1 00:05:05.664 16:24:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.664 16:24:43 -- scripts/common.sh@354 -- # echo 1 00:05:05.664 16:24:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.664 16:24:43 -- scripts/common.sh@365 -- # decimal 2 00:05:05.664 16:24:43 -- scripts/common.sh@352 -- # local d=2 00:05:05.664 16:24:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.664 16:24:43 -- scripts/common.sh@354 -- # echo 2 00:05:05.664 16:24:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.664 16:24:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.664 16:24:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.664 16:24:43 -- scripts/common.sh@367 -- # return 0 00:05:05.664 16:24:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.664 16:24:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.664 --rc genhtml_branch_coverage=1 00:05:05.664 --rc genhtml_function_coverage=1 00:05:05.664 --rc genhtml_legend=1 00:05:05.664 --rc geninfo_all_blocks=1 00:05:05.664 --rc geninfo_unexecuted_blocks=1 00:05:05.664 00:05:05.664 ' 00:05:05.664 16:24:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.664 --rc genhtml_branch_coverage=1 00:05:05.664 --rc genhtml_function_coverage=1 00:05:05.664 --rc genhtml_legend=1 00:05:05.664 --rc geninfo_all_blocks=1 00:05:05.664 --rc geninfo_unexecuted_blocks=1 00:05:05.664 00:05:05.664 ' 00:05:05.664 16:24:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.664 --rc genhtml_branch_coverage=1 00:05:05.664 --rc genhtml_function_coverage=1 00:05:05.664 --rc genhtml_legend=1 00:05:05.664 --rc geninfo_all_blocks=1 00:05:05.664 --rc geninfo_unexecuted_blocks=1 00:05:05.664 00:05:05.664 ' 00:05:05.664 16:24:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.664 --rc genhtml_branch_coverage=1 00:05:05.664 --rc genhtml_function_coverage=1 00:05:05.664 --rc genhtml_legend=1 00:05:05.664 --rc geninfo_all_blocks=1 00:05:05.664 --rc geninfo_unexecuted_blocks=1 00:05:05.664 00:05:05.664 ' 00:05:05.664 16:24:43 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.664 16:24:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.664 16:24:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.664 16:24:43 -- common/autotest_common.sh@10 -- # set +x 00:05:05.664 ************************************ 00:05:05.664 START TEST env_memory 00:05:05.664 ************************************ 00:05:05.664 16:24:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.664 00:05:05.664 00:05:05.664 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.664 http://cunit.sourceforge.net/ 00:05:05.664 00:05:05.664 00:05:05.664 Suite: memory 00:05:05.923 Test: alloc and free memory map ...[2024-11-16 16:24:43.178186] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.923 passed 00:05:05.923 Test: mem map translation ...[2024-11-16 16:24:43.209806] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.923 [2024-11-16 16:24:43.209962] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.923 [2024-11-16 16:24:43.210112] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.923 [2024-11-16 16:24:43.210220] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.923 passed 00:05:05.923 Test: mem map registration ...[2024-11-16 16:24:43.271091] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:05.923 [2024-11-16 16:24:43.271189] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:05.923 passed 00:05:05.923 Test: mem map adjacent registrations ...passed 00:05:05.923 00:05:05.923 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.923 suites 1 1 n/a 0 0 00:05:05.923 tests 4 4 4 0 0 00:05:05.923 asserts 152 152 152 0 n/a 00:05:05.923 00:05:05.923 Elapsed time = 0.178 seconds 00:05:05.923 00:05:05.923 real 0m0.197s 00:05:05.923 user 0m0.178s 00:05:05.923 sys 0m0.014s 00:05:05.923 16:24:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.923 16:24:43 -- common/autotest_common.sh@10 -- # set +x 00:05:05.923 ************************************ 00:05:05.923 END TEST env_memory 00:05:05.923 ************************************ 00:05:05.923 16:24:43 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.923 16:24:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.923 16:24:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.923 16:24:43 -- 
common/autotest_common.sh@10 -- # set +x 00:05:05.923 ************************************ 00:05:05.923 START TEST env_vtophys 00:05:05.924 ************************************ 00:05:05.924 16:24:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.924 EAL: lib.eal log level changed from notice to debug 00:05:05.924 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.924 EAL: Detected lcore 1 as core 0 on socket 0 00:05:05.924 EAL: Detected lcore 2 as core 0 on socket 0 00:05:05.924 EAL: Detected lcore 3 as core 0 on socket 0 00:05:05.924 EAL: Detected lcore 4 as core 0 on socket 0 00:05:05.924 EAL: Detected lcore 5 as core 0 on socket 0 00:05:05.924 EAL: Detected lcore 6 as core 0 on socket 0 00:05:05.924 EAL: Detected lcore 7 as core 0 on socket 0 00:05:05.924 EAL: Detected lcore 8 as core 0 on socket 0 00:05:05.924 EAL: Detected lcore 9 as core 0 on socket 0 00:05:05.924 EAL: Maximum logical cores by configuration: 128 00:05:05.924 EAL: Detected CPU lcores: 10 00:05:05.924 EAL: Detected NUMA nodes: 1 00:05:05.924 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:05.924 EAL: Detected shared linkage of DPDK 00:05:05.924 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:05.924 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:05.924 EAL: Registered [vdev] bus. 00:05:05.924 EAL: bus.vdev log level changed from disabled to notice 00:05:05.924 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:05.924 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:05.924 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:05.924 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:05.924 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:05.924 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:05.924 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:05.924 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:05.924 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.924 EAL: No shared files mode enabled, IPC is disabled 00:05:05.924 EAL: Selected IOVA mode 'PA' 00:05:05.924 EAL: Probing VFIO support... 00:05:05.924 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.924 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:05.924 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.924 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.924 EAL: Setting up physically contiguous memory... 
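EAL's VFIO probe above is just a sysfs lookup: if the vfio kernel module directory is missing, VFIO support is skipped and the IOVA mode falls back to 'PA', which is exactly what this run shows. The check is easy to reproduce from a shell:

# Mirrors the 'Module /sys/module/vfio not found' notices in the trace.
for mod in vfio vfio_pci; do
    if [[ -d /sys/module/$mod ]]; then
        echo "$mod: loaded"
    else
        echo "$mod: not loaded"
    fi
done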
00:05:06.188 EAL: Setting maximum number of open files to 524288 00:05:06.188 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.188 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.188 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.188 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.188 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.188 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.188 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.188 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.188 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.188 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.188 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.188 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.188 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.188 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.188 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.188 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.188 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.188 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.188 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.188 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.188 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.188 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.188 EAL: Hugepages will be freed exactly as allocated. 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: TSC frequency is ~2200000 KHz 00:05:06.188 EAL: Main lcore 0 is ready (tid=7f075f4eaa00;cpuset=[0]) 00:05:06.188 EAL: Trying to obtain current memory policy. 00:05:06.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.188 EAL: Restoring previous memory policy: 0 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.188 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.188 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.188 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:06.188 00:05:06.188 00:05:06.188 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.188 http://cunit.sourceforge.net/ 00:05:06.188 00:05:06.188 00:05:06.188 Suite: components_suite 00:05:06.188 Test: vtophys_malloc_test ...passed 00:05:06.188 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
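Every 'Heap on socket 0 was expanded/shrunk' pair that follows is one malloc/free cycle inside vtophys_spdk_malloc_test: the allocation forces EAL to map more 2 MB hugepages, which fires the registered mem event callback ('spdk:(nil)'), and the free hands them back. One way to watch the same cycle from outside the test, a sketch that assumes the binary runs without extra arguments and may need root:

# Poll free hugepages while the test runs; the count dips during each expansion.
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys &
pid=$!
while kill -0 "$pid" 2>/dev/null; do
    grep HugePages_Free /proc/meminfo
    sleep 0.2
done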
00:05:06.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.188 EAL: Restoring previous memory policy: 4 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.188 EAL: Trying to obtain current memory policy. 00:05:06.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.188 EAL: Restoring previous memory policy: 4 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.188 EAL: Trying to obtain current memory policy. 00:05:06.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.188 EAL: Restoring previous memory policy: 4 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.188 EAL: Trying to obtain current memory policy. 00:05:06.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.188 EAL: Restoring previous memory policy: 4 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.188 EAL: Trying to obtain current memory policy. 00:05:06.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.188 EAL: Restoring previous memory policy: 4 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.188 EAL: Trying to obtain current memory policy. 
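Each request below is bracketed by 'Setting policy MPOL_PREFERRED for socket 0' and 'Restoring previous memory policy: 4': EAL temporarily pins the allocation to the socket's NUMA node, then puts the thread's old policy back (4 corresponds to MPOL_LOCAL in the kernel's numbering). The same policies can be inspected from a shell, assuming numactl is installed:

numactl --show                        # the calling shell's policy, e.g. 'policy: default'
numactl --preferred=0 numactl --show  # the same query run under MPOL_PREFERRED on node 0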
00:05:06.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.188 EAL: Restoring previous memory policy: 4 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.188 EAL: request: mp_malloc_sync 00:05:06.188 EAL: No shared files mode enabled, IPC is disabled 00:05:06.188 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.188 EAL: Trying to obtain current memory policy. 00:05:06.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.188 EAL: Restoring previous memory policy: 4 00:05:06.188 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.189 EAL: request: mp_malloc_sync 00:05:06.189 EAL: No shared files mode enabled, IPC is disabled 00:05:06.189 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.465 EAL: request: mp_malloc_sync 00:05:06.465 EAL: No shared files mode enabled, IPC is disabled 00:05:06.465 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.465 EAL: Trying to obtain current memory policy. 00:05:06.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.465 EAL: Restoring previous memory policy: 4 00:05:06.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.465 EAL: request: mp_malloc_sync 00:05:06.465 EAL: No shared files mode enabled, IPC is disabled 00:05:06.465 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.465 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.465 EAL: request: mp_malloc_sync 00:05:06.465 EAL: No shared files mode enabled, IPC is disabled 00:05:06.465 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.465 EAL: Trying to obtain current memory policy. 00:05:06.465 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.740 EAL: Restoring previous memory policy: 4 00:05:06.740 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.740 EAL: request: mp_malloc_sync 00:05:06.740 EAL: No shared files mode enabled, IPC is disabled 00:05:06.740 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.740 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.740 EAL: request: mp_malloc_sync 00:05:06.740 EAL: No shared files mode enabled, IPC is disabled 00:05:06.740 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.740 EAL: Trying to obtain current memory policy. 
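The request sizes in this suite fit a simple ladder: step k asks for 2^k + 2 MB, which yields the 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB expansions above and the 1026 MB one below. The ladder is easy to reproduce:

for k in $(seq 1 10); do
    echo "$(( (1 << k) + 2 ))MB"
done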
00:05:06.740 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.999 EAL: Restoring previous memory policy: 4 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.517 passed 00:05:07.517 00:05:07.517 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.517 suites 1 1 n/a 0 0 00:05:07.517 tests 2 2 2 0 0 00:05:07.517 asserts 5176 5176 5176 0 n/a 00:05:07.517 00:05:07.517 Elapsed time = 1.243 seconds 00:05:07.517 EAL: request: mp_malloc_sync 00:05:07.517 EAL: No shared files mode enabled, IPC is disabled 00:05:07.517 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:07.517 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.517 EAL: request: mp_malloc_sync 00:05:07.517 EAL: No shared files mode enabled, IPC is disabled 00:05:07.517 EAL: Heap on socket 0 was shrunk by 2MB 00:05:07.517 EAL: No shared files mode enabled, IPC is disabled 00:05:07.517 EAL: No shared files mode enabled, IPC is disabled 00:05:07.517 EAL: No shared files mode enabled, IPC is disabled 00:05:07.517 00:05:07.517 real 0m1.439s 00:05:07.517 user 0m0.793s 00:05:07.517 sys 0m0.509s 00:05:07.517 16:24:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.518 ************************************ 00:05:07.518 END TEST env_vtophys 00:05:07.518 ************************************ 00:05:07.518 16:24:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 16:24:44 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:07.518 16:24:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.518 16:24:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.518 16:24:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 ************************************ 00:05:07.518 START TEST env_pci 00:05:07.518 ************************************ 00:05:07.518 16:24:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:07.518 00:05:07.518 00:05:07.518 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.518 http://cunit.sourceforge.net/ 00:05:07.518 00:05:07.518 00:05:07.518 Suite: pci 00:05:07.518 Test: pci_hook ...[2024-11-16 16:24:44.886115] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67475 has claimed it 00:05:07.518 passed 00:05:07.518 00:05:07.518 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.518 suites 1 1 n/a 0 0 00:05:07.518 tests 1 1 1 0 0 00:05:07.518 asserts 25 25 25 0 n/a 00:05:07.518 00:05:07.518 Elapsed time = 0.002 seconds 00:05:07.518 EAL: Cannot find device (10000:00:01.0) 00:05:07.518 EAL: Failed to attach device on primary process 00:05:07.518 00:05:07.518 real 0m0.020s 00:05:07.518 user 0m0.007s 00:05:07.518 sys 0m0.013s 00:05:07.518 16:24:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.518 16:24:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 ************************************ 00:05:07.518 END TEST env_pci 00:05:07.518 ************************************ 00:05:07.518 16:24:44 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:07.518 16:24:44 -- env/env.sh@15 -- # uname 00:05:07.518 16:24:44 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:07.518 16:24:44 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:07.518 16:24:44 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.518 16:24:44 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:07.518 16:24:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.518 16:24:44 -- common/autotest_common.sh@10 -- # set +x 00:05:07.518 ************************************ 00:05:07.518 START TEST env_dpdk_post_init 00:05:07.518 ************************************ 00:05:07.518 16:24:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.518 EAL: Detected CPU lcores: 10 00:05:07.518 EAL: Detected NUMA nodes: 1 00:05:07.518 EAL: Detected shared linkage of DPDK 00:05:07.518 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.518 EAL: Selected IOVA mode 'PA' 00:05:07.777 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.777 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:07.777 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:07.777 Starting DPDK initialization... 00:05:07.777 Starting SPDK post initialization... 00:05:07.777 SPDK NVMe probe 00:05:07.777 Attaching to 0000:00:06.0 00:05:07.777 Attaching to 0000:00:07.0 00:05:07.777 Attached to 0000:00:06.0 00:05:07.777 Attached to 0000:00:07.0 00:05:07.777 Cleaning up... 00:05:07.777 00:05:07.777 real 0m0.184s 00:05:07.777 user 0m0.041s 00:05:07.777 sys 0m0.044s 00:05:07.777 16:24:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.777 ************************************ 00:05:07.777 END TEST env_dpdk_post_init 00:05:07.777 ************************************ 00:05:07.777 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:07.777 16:24:45 -- env/env.sh@26 -- # uname 00:05:07.777 16:24:45 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:07.777 16:24:45 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:07.777 16:24:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.777 16:24:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.777 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:07.777 ************************************ 00:05:07.777 START TEST env_mem_callbacks 00:05:07.777 ************************************ 00:05:07.777 16:24:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:07.777 EAL: Detected CPU lcores: 10 00:05:07.777 EAL: Detected NUMA nodes: 1 00:05:07.777 EAL: Detected shared linkage of DPDK 00:05:07.777 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.777 EAL: Selected IOVA mode 'PA' 00:05:08.036 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.036 00:05:08.036 00:05:08.036 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.036 http://cunit.sourceforge.net/ 00:05:08.036 00:05:08.036 00:05:08.036 Suite: memory 00:05:08.036 Test: test ... 
00:05:08.036 register 0x200000200000 2097152 00:05:08.036 malloc 3145728 00:05:08.036 register 0x200000400000 4194304 00:05:08.036 buf 0x200000500000 len 3145728 PASSED 00:05:08.036 malloc 64 00:05:08.036 buf 0x2000004fff40 len 64 PASSED 00:05:08.036 malloc 4194304 00:05:08.036 register 0x200000800000 6291456 00:05:08.036 buf 0x200000a00000 len 4194304 PASSED 00:05:08.036 free 0x200000500000 3145728 00:05:08.036 free 0x2000004fff40 64 00:05:08.036 unregister 0x200000400000 4194304 PASSED 00:05:08.036 free 0x200000a00000 4194304 00:05:08.036 unregister 0x200000800000 6291456 PASSED 00:05:08.036 malloc 8388608 00:05:08.036 register 0x200000400000 10485760 00:05:08.036 buf 0x200000600000 len 8388608 PASSED 00:05:08.036 free 0x200000600000 8388608 00:05:08.036 unregister 0x200000400000 10485760 PASSED 00:05:08.036 passed 00:05:08.036 00:05:08.036 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.037 suites 1 1 n/a 0 0 00:05:08.037 tests 1 1 1 0 0 00:05:08.037 asserts 15 15 15 0 n/a 00:05:08.037 00:05:08.037 Elapsed time = 0.008 seconds 00:05:08.037 00:05:08.037 real 0m0.144s 00:05:08.037 user 0m0.017s 00:05:08.037 sys 0m0.027s 00:05:08.037 16:24:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.037 ************************************ 00:05:08.037 END TEST env_mem_callbacks 00:05:08.037 ************************************ 00:05:08.037 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.037 00:05:08.037 real 0m2.465s 00:05:08.037 user 0m1.241s 00:05:08.037 sys 0m0.868s 00:05:08.037 16:24:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.037 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.037 ************************************ 00:05:08.037 END TEST env 00:05:08.037 ************************************ 00:05:08.037 16:24:45 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.037 16:24:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.037 16:24:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.037 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.037 ************************************ 00:05:08.037 START TEST rpc 00:05:08.037 ************************************ 00:05:08.037 16:24:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.037 * Looking for test storage... 
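The rpc tests that follow all use one pattern: start spdk_tgt in the background, wait for its UNIX-domain RPC socket, then drive it with rpc.py (rpc_cmd in the trace is a wrapper built around rpc.py). A minimal sketch of that handshake, using the same binary path, socket and malloc parameters that appear below; the rpc_get_methods poll stands in for waitforlisten:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &   # -e bdev enables the bdev tracepoint group
tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512   # 8 MB bdev, 512-byte blocks
kill "$tgt_pid"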
00:05:08.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.296 16:24:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:08.296 16:24:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:08.296 16:24:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:08.296 16:24:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:08.296 16:24:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:08.296 16:24:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:08.296 16:24:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:08.296 16:24:45 -- scripts/common.sh@335 -- # IFS=.-: 00:05:08.296 16:24:45 -- scripts/common.sh@335 -- # read -ra ver1 00:05:08.296 16:24:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.296 16:24:45 -- scripts/common.sh@336 -- # read -ra ver2 00:05:08.296 16:24:45 -- scripts/common.sh@337 -- # local 'op=<' 00:05:08.296 16:24:45 -- scripts/common.sh@339 -- # ver1_l=2 00:05:08.296 16:24:45 -- scripts/common.sh@340 -- # ver2_l=1 00:05:08.296 16:24:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:08.296 16:24:45 -- scripts/common.sh@343 -- # case "$op" in 00:05:08.296 16:24:45 -- scripts/common.sh@344 -- # : 1 00:05:08.296 16:24:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:08.296 16:24:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.296 16:24:45 -- scripts/common.sh@364 -- # decimal 1 00:05:08.296 16:24:45 -- scripts/common.sh@352 -- # local d=1 00:05:08.296 16:24:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.296 16:24:45 -- scripts/common.sh@354 -- # echo 1 00:05:08.296 16:24:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:08.296 16:24:45 -- scripts/common.sh@365 -- # decimal 2 00:05:08.296 16:24:45 -- scripts/common.sh@352 -- # local d=2 00:05:08.296 16:24:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.296 16:24:45 -- scripts/common.sh@354 -- # echo 2 00:05:08.296 16:24:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:08.296 16:24:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:08.296 16:24:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:08.296 16:24:45 -- scripts/common.sh@367 -- # return 0 00:05:08.296 16:24:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.296 16:24:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:08.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.296 --rc genhtml_branch_coverage=1 00:05:08.296 --rc genhtml_function_coverage=1 00:05:08.296 --rc genhtml_legend=1 00:05:08.296 --rc geninfo_all_blocks=1 00:05:08.296 --rc geninfo_unexecuted_blocks=1 00:05:08.296 00:05:08.296 ' 00:05:08.296 16:24:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:08.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.296 --rc genhtml_branch_coverage=1 00:05:08.296 --rc genhtml_function_coverage=1 00:05:08.296 --rc genhtml_legend=1 00:05:08.296 --rc geninfo_all_blocks=1 00:05:08.296 --rc geninfo_unexecuted_blocks=1 00:05:08.296 00:05:08.296 ' 00:05:08.296 16:24:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:08.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.296 --rc genhtml_branch_coverage=1 00:05:08.296 --rc genhtml_function_coverage=1 00:05:08.296 --rc genhtml_legend=1 00:05:08.296 --rc geninfo_all_blocks=1 00:05:08.296 --rc geninfo_unexecuted_blocks=1 00:05:08.296 00:05:08.296 ' 00:05:08.296 16:24:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:08.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.296 --rc genhtml_branch_coverage=1 00:05:08.296 --rc genhtml_function_coverage=1 00:05:08.296 --rc genhtml_legend=1 00:05:08.296 --rc geninfo_all_blocks=1 00:05:08.296 --rc geninfo_unexecuted_blocks=1 00:05:08.296 00:05:08.296 ' 00:05:08.296 16:24:45 -- rpc/rpc.sh@65 -- # spdk_pid=67592 00:05:08.296 16:24:45 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:08.296 16:24:45 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.296 16:24:45 -- rpc/rpc.sh@67 -- # waitforlisten 67592 00:05:08.296 16:24:45 -- common/autotest_common.sh@829 -- # '[' -z 67592 ']' 00:05:08.296 16:24:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.296 16:24:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:08.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.296 16:24:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.296 16:24:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:08.296 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:05:08.296 [2024-11-16 16:24:45.699178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:08.296 [2024-11-16 16:24:45.699275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67592 ] 00:05:08.556 [2024-11-16 16:24:45.839534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.556 [2024-11-16 16:24:45.898038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.556 [2024-11-16 16:24:45.898203] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:08.556 [2024-11-16 16:24:45.898216] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67592' to capture a snapshot of events at runtime. 00:05:08.556 [2024-11-16 16:24:45.898224] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67592 for offline analysis/debug. 
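The startup notice above names both ways to get at the trace data; PID 67592 is specific to this run, and the -f flag for reading a copied trace file is assumed from spdk_trace usage:

# Live snapshot while the target is still running (the exact command the notice suggests):
spdk_trace -s spdk_tgt -p 67592
# Or keep the shared-memory file and analyze it after the target exits:
cp /dev/shm/spdk_tgt_trace.pid67592 /tmp/
spdk_trace -f /tmp/spdk_tgt_trace.pid67592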
00:05:08.556 [2024-11-16 16:24:45.898255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.493 16:24:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.493 16:24:46 -- common/autotest_common.sh@862 -- # return 0 00:05:09.493 16:24:46 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.493 16:24:46 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.493 16:24:46 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:09.493 16:24:46 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:09.493 16:24:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.493 16:24:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.493 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 ************************************ 00:05:09.493 START TEST rpc_integrity 00:05:09.493 ************************************ 00:05:09.493 16:24:46 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:09.493 16:24:46 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.493 16:24:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.493 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 16:24:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.493 16:24:46 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.493 16:24:46 -- rpc/rpc.sh@13 -- # jq length 00:05:09.493 16:24:46 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.493 16:24:46 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.493 16:24:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.493 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 16:24:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.493 16:24:46 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:09.493 16:24:46 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.493 16:24:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.493 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 16:24:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.493 16:24:46 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.493 { 00:05:09.493 "aliases": [ 00:05:09.493 "a559e48c-2335-4c01-9a10-5bfc71ed20ae" 00:05:09.493 ], 00:05:09.493 "assigned_rate_limits": { 00:05:09.493 "r_mbytes_per_sec": 0, 00:05:09.493 "rw_ios_per_sec": 0, 00:05:09.493 "rw_mbytes_per_sec": 0, 00:05:09.493 "w_mbytes_per_sec": 0 00:05:09.493 }, 00:05:09.493 "block_size": 512, 00:05:09.493 "claimed": false, 00:05:09.493 "driver_specific": {}, 00:05:09.493 "memory_domains": [ 00:05:09.493 { 00:05:09.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.493 "dma_device_type": 2 00:05:09.493 } 00:05:09.493 ], 00:05:09.493 "name": "Malloc0", 00:05:09.493 "num_blocks": 16384, 00:05:09.493 "product_name": "Malloc disk", 00:05:09.493 "supported_io_types": { 00:05:09.493 "abort": true, 00:05:09.493 "compare": false, 00:05:09.493 "compare_and_write": false, 00:05:09.493 "flush": true, 00:05:09.493 "nvme_admin": false, 00:05:09.493 "nvme_io": false, 00:05:09.493 "read": true, 00:05:09.493 "reset": true, 00:05:09.493 "unmap": true, 00:05:09.493 "write": true, 00:05:09.493 "write_zeroes": true 00:05:09.493 }, 
00:05:09.493 "uuid": "a559e48c-2335-4c01-9a10-5bfc71ed20ae", 00:05:09.493 "zoned": false 00:05:09.493 } 00:05:09.493 ]' 00:05:09.493 16:24:46 -- rpc/rpc.sh@17 -- # jq length 00:05:09.493 16:24:46 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.493 16:24:46 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:09.493 16:24:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.493 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 [2024-11-16 16:24:46.845736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:09.493 [2024-11-16 16:24:46.845786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.493 [2024-11-16 16:24:46.845801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb37b60 00:05:09.493 [2024-11-16 16:24:46.845808] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.493 [2024-11-16 16:24:46.847173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.493 [2024-11-16 16:24:46.847203] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.493 Passthru0 00:05:09.493 16:24:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.493 16:24:46 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.493 16:24:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.493 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 16:24:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.493 16:24:46 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.493 { 00:05:09.493 "aliases": [ 00:05:09.493 "a559e48c-2335-4c01-9a10-5bfc71ed20ae" 00:05:09.493 ], 00:05:09.493 "assigned_rate_limits": { 00:05:09.493 "r_mbytes_per_sec": 0, 00:05:09.493 "rw_ios_per_sec": 0, 00:05:09.493 "rw_mbytes_per_sec": 0, 00:05:09.493 "w_mbytes_per_sec": 0 00:05:09.493 }, 00:05:09.493 "block_size": 512, 00:05:09.493 "claim_type": "exclusive_write", 00:05:09.493 "claimed": true, 00:05:09.493 "driver_specific": {}, 00:05:09.493 "memory_domains": [ 00:05:09.493 { 00:05:09.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.493 "dma_device_type": 2 00:05:09.493 } 00:05:09.493 ], 00:05:09.493 "name": "Malloc0", 00:05:09.493 "num_blocks": 16384, 00:05:09.493 "product_name": "Malloc disk", 00:05:09.493 "supported_io_types": { 00:05:09.493 "abort": true, 00:05:09.493 "compare": false, 00:05:09.493 "compare_and_write": false, 00:05:09.493 "flush": true, 00:05:09.493 "nvme_admin": false, 00:05:09.493 "nvme_io": false, 00:05:09.493 "read": true, 00:05:09.493 "reset": true, 00:05:09.493 "unmap": true, 00:05:09.493 "write": true, 00:05:09.493 "write_zeroes": true 00:05:09.493 }, 00:05:09.493 "uuid": "a559e48c-2335-4c01-9a10-5bfc71ed20ae", 00:05:09.493 "zoned": false 00:05:09.493 }, 00:05:09.493 { 00:05:09.493 "aliases": [ 00:05:09.493 "008867a8-4a81-5100-b864-222fc4fb58ab" 00:05:09.493 ], 00:05:09.493 "assigned_rate_limits": { 00:05:09.493 "r_mbytes_per_sec": 0, 00:05:09.493 "rw_ios_per_sec": 0, 00:05:09.493 "rw_mbytes_per_sec": 0, 00:05:09.493 "w_mbytes_per_sec": 0 00:05:09.493 }, 00:05:09.493 "block_size": 512, 00:05:09.493 "claimed": false, 00:05:09.493 "driver_specific": { 00:05:09.493 "passthru": { 00:05:09.493 "base_bdev_name": "Malloc0", 00:05:09.493 "name": "Passthru0" 00:05:09.493 } 00:05:09.493 }, 00:05:09.493 "memory_domains": [ 00:05:09.493 { 00:05:09.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.493 "dma_device_type": 2 00:05:09.493 } 00:05:09.493 ], 
00:05:09.493 "name": "Passthru0", 00:05:09.493 "num_blocks": 16384, 00:05:09.493 "product_name": "passthru", 00:05:09.493 "supported_io_types": { 00:05:09.493 "abort": true, 00:05:09.493 "compare": false, 00:05:09.493 "compare_and_write": false, 00:05:09.493 "flush": true, 00:05:09.493 "nvme_admin": false, 00:05:09.493 "nvme_io": false, 00:05:09.493 "read": true, 00:05:09.493 "reset": true, 00:05:09.493 "unmap": true, 00:05:09.493 "write": true, 00:05:09.493 "write_zeroes": true 00:05:09.493 }, 00:05:09.493 "uuid": "008867a8-4a81-5100-b864-222fc4fb58ab", 00:05:09.493 "zoned": false 00:05:09.493 } 00:05:09.493 ]' 00:05:09.493 16:24:46 -- rpc/rpc.sh@21 -- # jq length 00:05:09.493 16:24:46 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.493 16:24:46 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.493 16:24:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.493 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 16:24:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.493 16:24:46 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:09.493 16:24:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.493 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 16:24:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.493 16:24:46 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.493 16:24:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.493 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 16:24:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.493 16:24:46 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.493 16:24:46 -- rpc/rpc.sh@26 -- # jq length 00:05:09.752 16:24:47 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.752 00:05:09.752 real 0m0.323s 00:05:09.752 user 0m0.214s 00:05:09.752 sys 0m0.033s 00:05:09.752 16:24:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.752 ************************************ 00:05:09.752 END TEST rpc_integrity 00:05:09.752 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.752 ************************************ 00:05:09.752 16:24:47 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:09.752 16:24:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.752 16:24:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.752 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.752 ************************************ 00:05:09.752 START TEST rpc_plugins 00:05:09.752 ************************************ 00:05:09.752 16:24:47 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:09.752 16:24:47 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:09.752 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.752 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.752 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.752 16:24:47 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:09.752 16:24:47 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:09.752 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.752 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.752 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.752 16:24:47 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:09.752 { 00:05:09.752 "aliases": [ 00:05:09.752 "0f62d07a-4f32-448a-933e-59df20385ae3" 00:05:09.752 ], 00:05:09.752 "assigned_rate_limits": { 00:05:09.752 "r_mbytes_per_sec": 0, 00:05:09.752 
"rw_ios_per_sec": 0, 00:05:09.752 "rw_mbytes_per_sec": 0, 00:05:09.752 "w_mbytes_per_sec": 0 00:05:09.752 }, 00:05:09.753 "block_size": 4096, 00:05:09.753 "claimed": false, 00:05:09.753 "driver_specific": {}, 00:05:09.753 "memory_domains": [ 00:05:09.753 { 00:05:09.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.753 "dma_device_type": 2 00:05:09.753 } 00:05:09.753 ], 00:05:09.753 "name": "Malloc1", 00:05:09.753 "num_blocks": 256, 00:05:09.753 "product_name": "Malloc disk", 00:05:09.753 "supported_io_types": { 00:05:09.753 "abort": true, 00:05:09.753 "compare": false, 00:05:09.753 "compare_and_write": false, 00:05:09.753 "flush": true, 00:05:09.753 "nvme_admin": false, 00:05:09.753 "nvme_io": false, 00:05:09.753 "read": true, 00:05:09.753 "reset": true, 00:05:09.753 "unmap": true, 00:05:09.753 "write": true, 00:05:09.753 "write_zeroes": true 00:05:09.753 }, 00:05:09.753 "uuid": "0f62d07a-4f32-448a-933e-59df20385ae3", 00:05:09.753 "zoned": false 00:05:09.753 } 00:05:09.753 ]' 00:05:09.753 16:24:47 -- rpc/rpc.sh@32 -- # jq length 00:05:09.753 16:24:47 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:09.753 16:24:47 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:09.753 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.753 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.753 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.753 16:24:47 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:09.753 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.753 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.753 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.753 16:24:47 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:09.753 16:24:47 -- rpc/rpc.sh@36 -- # jq length 00:05:09.753 16:24:47 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:09.753 00:05:09.753 real 0m0.161s 00:05:09.753 user 0m0.108s 00:05:09.753 sys 0m0.018s 00:05:09.753 16:24:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.753 ************************************ 00:05:09.753 END TEST rpc_plugins 00:05:09.753 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.753 ************************************ 00:05:10.012 16:24:47 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.012 16:24:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.012 16:24:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.012 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.012 ************************************ 00:05:10.012 START TEST rpc_trace_cmd_test 00:05:10.012 ************************************ 00:05:10.012 16:24:47 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:10.012 16:24:47 -- rpc/rpc.sh@40 -- # local info 00:05:10.012 16:24:47 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.012 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.012 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.012 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.012 16:24:47 -- rpc/rpc.sh@42 -- # info='{ 00:05:10.012 "bdev": { 00:05:10.012 "mask": "0x8", 00:05:10.012 "tpoint_mask": "0xffffffffffffffff" 00:05:10.012 }, 00:05:10.012 "bdev_nvme": { 00:05:10.012 "mask": "0x4000", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "blobfs": { 00:05:10.012 "mask": "0x80", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "dsa": { 00:05:10.012 "mask": "0x200", 00:05:10.012 
"tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "ftl": { 00:05:10.012 "mask": "0x40", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "iaa": { 00:05:10.012 "mask": "0x1000", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "iscsi_conn": { 00:05:10.012 "mask": "0x2", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "nvme_pcie": { 00:05:10.012 "mask": "0x800", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "nvme_tcp": { 00:05:10.012 "mask": "0x2000", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "nvmf_rdma": { 00:05:10.012 "mask": "0x10", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "nvmf_tcp": { 00:05:10.012 "mask": "0x20", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "scsi": { 00:05:10.012 "mask": "0x4", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "thread": { 00:05:10.012 "mask": "0x400", 00:05:10.012 "tpoint_mask": "0x0" 00:05:10.012 }, 00:05:10.012 "tpoint_group_mask": "0x8", 00:05:10.012 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67592" 00:05:10.012 }' 00:05:10.012 16:24:47 -- rpc/rpc.sh@43 -- # jq length 00:05:10.012 16:24:47 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:10.012 16:24:47 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.012 16:24:47 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.012 16:24:47 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.012 16:24:47 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.012 16:24:47 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.272 16:24:47 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.272 16:24:47 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.272 16:24:47 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:10.272 00:05:10.272 real 0m0.285s 00:05:10.272 user 0m0.242s 00:05:10.272 sys 0m0.031s 00:05:10.272 16:24:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.272 ************************************ 00:05:10.272 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.272 END TEST rpc_trace_cmd_test 00:05:10.272 ************************************ 00:05:10.272 16:24:47 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:10.272 16:24:47 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:10.272 16:24:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.272 16:24:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.272 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.272 ************************************ 00:05:10.272 START TEST go_rpc 00:05:10.272 ************************************ 00:05:10.272 16:24:47 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:10.272 16:24:47 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:10.272 16:24:47 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:10.272 16:24:47 -- rpc/rpc.sh@52 -- # jq length 00:05:10.272 16:24:47 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:10.272 16:24:47 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.272 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.272 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.272 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.272 16:24:47 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:10.272 16:24:47 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:10.272 16:24:47 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["221be2b8-de0f-4a17-b276-1c4e1ab864e2"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"221be2b8-de0f-4a17-b276-1c4e1ab864e2","zoned":false}]' 00:05:10.272 16:24:47 -- rpc/rpc.sh@57 -- # jq length 00:05:10.531 16:24:47 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:10.531 16:24:47 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.531 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.531 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.531 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.531 16:24:47 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:10.531 16:24:47 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:10.531 16:24:47 -- rpc/rpc.sh@61 -- # jq length 00:05:10.531 16:24:47 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:10.531 00:05:10.531 real 0m0.227s 00:05:10.531 user 0m0.152s 00:05:10.531 sys 0m0.037s 00:05:10.531 16:24:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.531 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.531 ************************************ 00:05:10.531 END TEST go_rpc 00:05:10.531 ************************************ 00:05:10.531 16:24:47 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:10.531 16:24:47 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:10.531 16:24:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.531 16:24:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.531 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.531 ************************************ 00:05:10.531 START TEST rpc_daemon_integrity 00:05:10.531 ************************************ 00:05:10.531 16:24:47 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:10.531 16:24:47 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.531 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.531 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.531 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.531 16:24:47 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.531 16:24:47 -- rpc/rpc.sh@13 -- # jq length 00:05:10.531 16:24:47 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.531 16:24:47 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.531 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.531 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.531 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.531 16:24:47 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:10.531 16:24:47 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.531 16:24:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.531 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:05:10.531 16:24:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.531 16:24:47 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.531 { 00:05:10.531 "aliases": [ 00:05:10.531 "39e93425-a04e-461d-a182-f73083843fd5" 00:05:10.531 ], 00:05:10.531 "assigned_rate_limits": { 00:05:10.531 
"r_mbytes_per_sec": 0, 00:05:10.531 "rw_ios_per_sec": 0, 00:05:10.531 "rw_mbytes_per_sec": 0, 00:05:10.531 "w_mbytes_per_sec": 0 00:05:10.531 }, 00:05:10.531 "block_size": 512, 00:05:10.531 "claimed": false, 00:05:10.531 "driver_specific": {}, 00:05:10.531 "memory_domains": [ 00:05:10.531 { 00:05:10.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.531 "dma_device_type": 2 00:05:10.531 } 00:05:10.531 ], 00:05:10.531 "name": "Malloc3", 00:05:10.531 "num_blocks": 16384, 00:05:10.531 "product_name": "Malloc disk", 00:05:10.531 "supported_io_types": { 00:05:10.531 "abort": true, 00:05:10.531 "compare": false, 00:05:10.531 "compare_and_write": false, 00:05:10.531 "flush": true, 00:05:10.531 "nvme_admin": false, 00:05:10.531 "nvme_io": false, 00:05:10.531 "read": true, 00:05:10.531 "reset": true, 00:05:10.531 "unmap": true, 00:05:10.531 "write": true, 00:05:10.531 "write_zeroes": true 00:05:10.531 }, 00:05:10.531 "uuid": "39e93425-a04e-461d-a182-f73083843fd5", 00:05:10.531 "zoned": false 00:05:10.531 } 00:05:10.531 ]' 00:05:10.531 16:24:47 -- rpc/rpc.sh@17 -- # jq length 00:05:10.790 16:24:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.790 16:24:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:10.790 16:24:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.790 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:10.790 [2024-11-16 16:24:48.046348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:10.790 [2024-11-16 16:24:48.046400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.790 [2024-11-16 16:24:48.046430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb39990 00:05:10.790 [2024-11-16 16:24:48.046453] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.790 [2024-11-16 16:24:48.047650] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.790 [2024-11-16 16:24:48.047676] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.790 Passthru0 00:05:10.791 16:24:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.791 16:24:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.791 16:24:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.791 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:10.791 16:24:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.791 16:24:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.791 { 00:05:10.791 "aliases": [ 00:05:10.791 "39e93425-a04e-461d-a182-f73083843fd5" 00:05:10.791 ], 00:05:10.791 "assigned_rate_limits": { 00:05:10.791 "r_mbytes_per_sec": 0, 00:05:10.791 "rw_ios_per_sec": 0, 00:05:10.791 "rw_mbytes_per_sec": 0, 00:05:10.791 "w_mbytes_per_sec": 0 00:05:10.791 }, 00:05:10.791 "block_size": 512, 00:05:10.791 "claim_type": "exclusive_write", 00:05:10.791 "claimed": true, 00:05:10.791 "driver_specific": {}, 00:05:10.791 "memory_domains": [ 00:05:10.791 { 00:05:10.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.791 "dma_device_type": 2 00:05:10.791 } 00:05:10.791 ], 00:05:10.791 "name": "Malloc3", 00:05:10.791 "num_blocks": 16384, 00:05:10.791 "product_name": "Malloc disk", 00:05:10.791 "supported_io_types": { 00:05:10.791 "abort": true, 00:05:10.791 "compare": false, 00:05:10.791 "compare_and_write": false, 00:05:10.791 "flush": true, 00:05:10.791 "nvme_admin": false, 00:05:10.791 "nvme_io": false, 00:05:10.791 "read": true, 00:05:10.791 "reset": true, 
00:05:10.791 "unmap": true, 00:05:10.791 "write": true, 00:05:10.791 "write_zeroes": true 00:05:10.791 }, 00:05:10.791 "uuid": "39e93425-a04e-461d-a182-f73083843fd5", 00:05:10.791 "zoned": false 00:05:10.791 }, 00:05:10.791 { 00:05:10.791 "aliases": [ 00:05:10.791 "83c15f29-d612-5b6c-87ad-4a742aaa3eef" 00:05:10.791 ], 00:05:10.791 "assigned_rate_limits": { 00:05:10.791 "r_mbytes_per_sec": 0, 00:05:10.791 "rw_ios_per_sec": 0, 00:05:10.791 "rw_mbytes_per_sec": 0, 00:05:10.791 "w_mbytes_per_sec": 0 00:05:10.791 }, 00:05:10.791 "block_size": 512, 00:05:10.791 "claimed": false, 00:05:10.791 "driver_specific": { 00:05:10.791 "passthru": { 00:05:10.791 "base_bdev_name": "Malloc3", 00:05:10.791 "name": "Passthru0" 00:05:10.791 } 00:05:10.791 }, 00:05:10.791 "memory_domains": [ 00:05:10.791 { 00:05:10.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.791 "dma_device_type": 2 00:05:10.791 } 00:05:10.791 ], 00:05:10.791 "name": "Passthru0", 00:05:10.791 "num_blocks": 16384, 00:05:10.791 "product_name": "passthru", 00:05:10.791 "supported_io_types": { 00:05:10.791 "abort": true, 00:05:10.791 "compare": false, 00:05:10.791 "compare_and_write": false, 00:05:10.791 "flush": true, 00:05:10.791 "nvme_admin": false, 00:05:10.791 "nvme_io": false, 00:05:10.791 "read": true, 00:05:10.791 "reset": true, 00:05:10.791 "unmap": true, 00:05:10.791 "write": true, 00:05:10.791 "write_zeroes": true 00:05:10.791 }, 00:05:10.791 "uuid": "83c15f29-d612-5b6c-87ad-4a742aaa3eef", 00:05:10.791 "zoned": false 00:05:10.791 } 00:05:10.791 ]' 00:05:10.791 16:24:48 -- rpc/rpc.sh@21 -- # jq length 00:05:10.791 16:24:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.791 16:24:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.791 16:24:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.791 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:10.791 16:24:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.791 16:24:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:10.791 16:24:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.791 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:10.791 16:24:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.791 16:24:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.791 16:24:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.791 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:10.791 16:24:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.791 16:24:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.791 16:24:48 -- rpc/rpc.sh@26 -- # jq length 00:05:10.791 16:24:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.791 00:05:10.791 real 0m0.317s 00:05:10.791 user 0m0.213s 00:05:10.791 sys 0m0.033s 00:05:10.791 16:24:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.791 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:10.791 ************************************ 00:05:10.791 END TEST rpc_daemon_integrity 00:05:10.791 ************************************ 00:05:10.791 16:24:48 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:10.791 16:24:48 -- rpc/rpc.sh@84 -- # killprocess 67592 00:05:10.791 16:24:48 -- common/autotest_common.sh@936 -- # '[' -z 67592 ']' 00:05:10.791 16:24:48 -- common/autotest_common.sh@940 -- # kill -0 67592 00:05:10.791 16:24:48 -- common/autotest_common.sh@941 -- # uname 00:05:10.791 16:24:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:10.791 16:24:48 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67592 00:05:11.050 16:24:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.050 16:24:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.050 killing process with pid 67592 00:05:11.050 16:24:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67592' 00:05:11.050 16:24:48 -- common/autotest_common.sh@955 -- # kill 67592 00:05:11.050 16:24:48 -- common/autotest_common.sh@960 -- # wait 67592 00:05:11.309 00:05:11.309 real 0m3.175s 00:05:11.309 user 0m4.203s 00:05:11.309 sys 0m0.744s 00:05:11.309 16:24:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.309 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:11.309 ************************************ 00:05:11.309 END TEST rpc 00:05:11.309 ************************************ 00:05:11.309 16:24:48 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:11.309 16:24:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.309 16:24:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.309 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:11.309 ************************************ 00:05:11.309 START TEST rpc_client 00:05:11.309 ************************************ 00:05:11.309 16:24:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:11.309 * Looking for test storage... 00:05:11.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:11.309 16:24:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.309 16:24:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.309 16:24:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.568 16:24:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.568 16:24:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.568 16:24:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.568 16:24:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.568 16:24:48 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.568 16:24:48 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.568 16:24:48 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.568 16:24:48 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.568 16:24:48 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.568 16:24:48 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.568 16:24:48 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.568 16:24:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.568 16:24:48 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.568 16:24:48 -- scripts/common.sh@344 -- # : 1 00:05:11.568 16:24:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.568 16:24:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.568 16:24:48 -- scripts/common.sh@364 -- # decimal 1 00:05:11.568 16:24:48 -- scripts/common.sh@352 -- # local d=1 00:05:11.568 16:24:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.568 16:24:48 -- scripts/common.sh@354 -- # echo 1 00:05:11.568 16:24:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.568 16:24:48 -- scripts/common.sh@365 -- # decimal 2 00:05:11.568 16:24:48 -- scripts/common.sh@352 -- # local d=2 00:05:11.568 16:24:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.568 16:24:48 -- scripts/common.sh@354 -- # echo 2 00:05:11.568 16:24:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.568 16:24:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.568 16:24:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.568 16:24:48 -- scripts/common.sh@367 -- # return 0 00:05:11.568 16:24:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.568 16:24:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.568 --rc genhtml_branch_coverage=1 00:05:11.568 --rc genhtml_function_coverage=1 00:05:11.568 --rc genhtml_legend=1 00:05:11.568 --rc geninfo_all_blocks=1 00:05:11.568 --rc geninfo_unexecuted_blocks=1 00:05:11.568 00:05:11.568 ' 00:05:11.568 16:24:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.568 --rc genhtml_branch_coverage=1 00:05:11.568 --rc genhtml_function_coverage=1 00:05:11.568 --rc genhtml_legend=1 00:05:11.568 --rc geninfo_all_blocks=1 00:05:11.568 --rc geninfo_unexecuted_blocks=1 00:05:11.568 00:05:11.568 ' 00:05:11.568 16:24:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.568 --rc genhtml_branch_coverage=1 00:05:11.568 --rc genhtml_function_coverage=1 00:05:11.568 --rc genhtml_legend=1 00:05:11.568 --rc geninfo_all_blocks=1 00:05:11.568 --rc geninfo_unexecuted_blocks=1 00:05:11.568 00:05:11.568 ' 00:05:11.568 16:24:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.568 --rc genhtml_branch_coverage=1 00:05:11.568 --rc genhtml_function_coverage=1 00:05:11.568 --rc genhtml_legend=1 00:05:11.568 --rc geninfo_all_blocks=1 00:05:11.568 --rc geninfo_unexecuted_blocks=1 00:05:11.568 00:05:11.568 ' 00:05:11.568 16:24:48 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:11.568 OK 00:05:11.568 16:24:48 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:11.568 00:05:11.568 real 0m0.213s 00:05:11.568 user 0m0.135s 00:05:11.568 sys 0m0.089s 00:05:11.568 ************************************ 00:05:11.568 END TEST rpc_client 00:05:11.568 ************************************ 00:05:11.568 16:24:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.568 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:11.568 16:24:48 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:11.568 16:24:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.568 16:24:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.568 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:05:11.568 ************************************ 00:05:11.568 START TEST 
json_config 00:05:11.568 ************************************ 00:05:11.568 16:24:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:11.568 16:24:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.568 16:24:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.568 16:24:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.838 16:24:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.838 16:24:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.838 16:24:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.838 16:24:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.838 16:24:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.838 16:24:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.838 16:24:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.838 16:24:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.838 16:24:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.838 16:24:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.838 16:24:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.838 16:24:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.838 16:24:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.838 16:24:49 -- scripts/common.sh@344 -- # : 1 00:05:11.838 16:24:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.838 16:24:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.838 16:24:49 -- scripts/common.sh@364 -- # decimal 1 00:05:11.838 16:24:49 -- scripts/common.sh@352 -- # local d=1 00:05:11.838 16:24:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.838 16:24:49 -- scripts/common.sh@354 -- # echo 1 00:05:11.838 16:24:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.838 16:24:49 -- scripts/common.sh@365 -- # decimal 2 00:05:11.838 16:24:49 -- scripts/common.sh@352 -- # local d=2 00:05:11.838 16:24:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.838 16:24:49 -- scripts/common.sh@354 -- # echo 2 00:05:11.838 16:24:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.838 16:24:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.838 16:24:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.838 16:24:49 -- scripts/common.sh@367 -- # return 0 00:05:11.838 16:24:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.839 16:24:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.839 --rc genhtml_branch_coverage=1 00:05:11.839 --rc genhtml_function_coverage=1 00:05:11.839 --rc genhtml_legend=1 00:05:11.839 --rc geninfo_all_blocks=1 00:05:11.839 --rc geninfo_unexecuted_blocks=1 00:05:11.839 00:05:11.839 ' 00:05:11.839 16:24:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.839 --rc genhtml_branch_coverage=1 00:05:11.839 --rc genhtml_function_coverage=1 00:05:11.839 --rc genhtml_legend=1 00:05:11.839 --rc geninfo_all_blocks=1 00:05:11.839 --rc geninfo_unexecuted_blocks=1 00:05:11.839 00:05:11.839 ' 00:05:11.839 16:24:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.839 --rc genhtml_branch_coverage=1 00:05:11.839 --rc genhtml_function_coverage=1 00:05:11.839 --rc genhtml_legend=1 00:05:11.839 --rc 
geninfo_all_blocks=1 00:05:11.839 --rc geninfo_unexecuted_blocks=1 00:05:11.839 00:05:11.839 ' 00:05:11.839 16:24:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.839 --rc genhtml_branch_coverage=1 00:05:11.839 --rc genhtml_function_coverage=1 00:05:11.839 --rc genhtml_legend=1 00:05:11.839 --rc geninfo_all_blocks=1 00:05:11.840 --rc geninfo_unexecuted_blocks=1 00:05:11.840 00:05:11.840 ' 00:05:11.840 16:24:49 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:11.840 16:24:49 -- nvmf/common.sh@7 -- # uname -s 00:05:11.840 16:24:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.840 16:24:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.840 16:24:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.840 16:24:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.840 16:24:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.840 16:24:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.840 16:24:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.840 16:24:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.840 16:24:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.840 16:24:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.840 16:24:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:05:11.840 16:24:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:05:11.840 16:24:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.840 16:24:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.840 16:24:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.840 16:24:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.840 16:24:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.840 16:24:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.840 16:24:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.840 16:24:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.840 16:24:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.841 16:24:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.841 
16:24:49 -- paths/export.sh@5 -- # export PATH 00:05:11.841 16:24:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.841 16:24:49 -- nvmf/common.sh@46 -- # : 0 00:05:11.841 16:24:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:11.841 16:24:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:11.841 16:24:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:11.841 16:24:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.841 16:24:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.841 16:24:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:11.841 16:24:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:11.841 16:24:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:11.841 16:24:49 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:11.841 16:24:49 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:11.841 16:24:49 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:11.841 16:24:49 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:11.841 16:24:49 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:11.841 16:24:49 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:11.841 16:24:49 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:11.841 16:24:49 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:11.841 16:24:49 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:11.841 16:24:49 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:11.841 16:24:49 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:11.841 16:24:49 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:11.841 16:24:49 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:11.841 16:24:49 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.841 16:24:49 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:11.841 INFO: JSON configuration test init 00:05:11.841 16:24:49 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:11.841 16:24:49 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:11.841 16:24:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.841 16:24:49 -- common/autotest_common.sh@10 -- # set +x 00:05:11.842 16:24:49 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:11.842 16:24:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.842 16:24:49 -- common/autotest_common.sh@10 -- # set +x 00:05:11.842 16:24:49 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:11.842 16:24:49 -- json_config/json_config.sh@98 -- # local app=target 00:05:11.842 
16:24:49 -- json_config/json_config.sh@99 -- # shift 00:05:11.842 Waiting for target to run... 00:05:11.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.842 16:24:49 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:11.842 16:24:49 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:11.842 16:24:49 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:11.842 16:24:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:11.842 16:24:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:11.842 16:24:49 -- json_config/json_config.sh@111 -- # app_pid[$app]=67919 00:05:11.842 16:24:49 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:11.842 16:24:49 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:11.842 16:24:49 -- json_config/json_config.sh@114 -- # waitforlisten 67919 /var/tmp/spdk_tgt.sock 00:05:11.842 16:24:49 -- common/autotest_common.sh@829 -- # '[' -z 67919 ']' 00:05:11.842 16:24:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.842 16:24:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.842 16:24:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.842 16:24:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.842 16:24:49 -- common/autotest_common.sh@10 -- # set +x 00:05:11.842 [2024-11-16 16:24:49.163603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:11.842 [2024-11-16 16:24:49.163850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67919 ] 00:05:12.106 [2024-11-16 16:24:49.564364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.365 [2024-11-16 16:24:49.610630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.365 [2024-11-16 16:24:49.611005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.939 16:24:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.939 16:24:50 -- common/autotest_common.sh@862 -- # return 0 00:05:12.939 16:24:50 -- json_config/json_config.sh@115 -- # echo '' 00:05:12.939 00:05:12.939 16:24:50 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:12.939 16:24:50 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:12.939 16:24:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.939 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:12.939 16:24:50 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:12.939 16:24:50 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:12.939 16:24:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.939 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:12.939 16:24:50 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:12.939 16:24:50 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:12.939 16:24:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:13.198 16:24:50 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:13.198 16:24:50 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:13.198 16:24:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.198 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.198 16:24:50 -- json_config/json_config.sh@48 -- # local ret=0 00:05:13.198 16:24:50 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:13.198 16:24:50 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:13.198 16:24:50 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:13.198 16:24:50 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:13.198 16:24:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:13.764 16:24:50 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:13.765 16:24:50 -- json_config/json_config.sh@51 -- # local get_types 00:05:13.765 16:24:50 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:13.765 16:24:50 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:13.765 16:24:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.765 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:05:13.765 16:24:51 -- json_config/json_config.sh@58 -- # return 0 00:05:13.765 16:24:51 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:13.765 16:24:51 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:13.765 16:24:51 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:13.765 16:24:51 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:13.765 16:24:51 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:13.765 16:24:51 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:13.765 16:24:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.765 16:24:51 -- common/autotest_common.sh@10 -- # set +x 00:05:13.765 16:24:51 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:13.765 16:24:51 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:13.765 16:24:51 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:13.765 16:24:51 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:13.765 16:24:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:13.765 MallocForNvmf0 00:05:13.765 16:24:51 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:13.765 16:24:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:14.022 MallocForNvmf1 00:05:14.022 16:24:51 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:14.022 16:24:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:14.280 [2024-11-16 16:24:51.621600] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.280 16:24:51 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:14.280 16:24:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:14.539 16:24:51 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:14.539 16:24:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:14.797 16:24:52 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:14.797 16:24:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.056 16:24:52 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:15.056 16:24:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:15.315 [2024-11-16 16:24:52.626085] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:15.315 16:24:52 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:15.315 16:24:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.315 16:24:52 -- common/autotest_common.sh@10 -- # set +x 00:05:15.315 16:24:52 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:15.315 16:24:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.315 16:24:52 -- common/autotest_common.sh@10 -- # set +x 00:05:15.315 16:24:52 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:15.315 16:24:52 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:15.315 16:24:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:15.574 MallocBdevForConfigChangeCheck 00:05:15.574 16:24:52 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:15.574 16:24:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.574 16:24:52 -- common/autotest_common.sh@10 -- # set +x 00:05:15.574 16:24:52 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:15.574 16:24:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.141 INFO: shutting down applications... 00:05:16.141 16:24:53 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
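For reference, the NVMe-oF target state captured by save_config above can be rebuilt by hand with the same RPCs the test just issued. A minimal sketch, assuming a spdk_tgt is already serving RPCs on /var/tmp/spdk_tgt.sock and that it is run from the SPDK repo root (everything beyond the flags and names visible in the log is illustrative):

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # TCP transport with the same options as in the trace above
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  # two malloc bdevs to serve as namespaces (8 MiB of 512 B blocks, 4 MiB of 1 KiB blocks)
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  # subsystem allowing any host (-a) with the serial from the log, then namespaces and a TCP listener
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

After the last step the target prints the same "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice seen in the log above.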
00:05:16.141 16:24:53 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:16.141 16:24:53 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:16.141 16:24:53 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:16.141 16:24:53 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:16.400 Calling clear_iscsi_subsystem 00:05:16.400 Calling clear_nvmf_subsystem 00:05:16.400 Calling clear_nbd_subsystem 00:05:16.400 Calling clear_ublk_subsystem 00:05:16.400 Calling clear_vhost_blk_subsystem 00:05:16.400 Calling clear_vhost_scsi_subsystem 00:05:16.400 Calling clear_scheduler_subsystem 00:05:16.400 Calling clear_bdev_subsystem 00:05:16.400 Calling clear_accel_subsystem 00:05:16.400 Calling clear_vmd_subsystem 00:05:16.400 Calling clear_sock_subsystem 00:05:16.400 Calling clear_iobuf_subsystem 00:05:16.400 16:24:53 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:16.400 16:24:53 -- json_config/json_config.sh@396 -- # count=100 00:05:16.400 16:24:53 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:16.400 16:24:53 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:16.400 16:24:53 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.400 16:24:53 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:16.659 16:24:54 -- json_config/json_config.sh@398 -- # break 00:05:16.659 16:24:54 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:16.659 16:24:54 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:16.659 16:24:54 -- json_config/json_config.sh@120 -- # local app=target 00:05:16.659 16:24:54 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:16.659 16:24:54 -- json_config/json_config.sh@124 -- # [[ -n 67919 ]] 00:05:16.659 16:24:54 -- json_config/json_config.sh@127 -- # kill -SIGINT 67919 00:05:16.659 16:24:54 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:16.659 16:24:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:16.659 16:24:54 -- json_config/json_config.sh@130 -- # kill -0 67919 00:05:16.659 16:24:54 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:17.227 16:24:54 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:17.227 16:24:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:17.227 16:24:54 -- json_config/json_config.sh@130 -- # kill -0 67919 00:05:17.227 16:24:54 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:17.227 16:24:54 -- json_config/json_config.sh@132 -- # break 00:05:17.227 16:24:54 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:17.227 SPDK target shutdown done 00:05:17.227 16:24:54 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:17.227 INFO: relaunching applications... 00:05:17.227 16:24:54 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
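The relaunch that follows is a save/replay round-trip; a minimal sketch of the same flow, assuming the first target is still up on /var/tmp/spdk_tgt.sock and using only the paths and flags visible in the trace:

  # capture the live configuration as JSON
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  # stop the old target (the test sends SIGINT and polls it with kill -0), then
  # boot a fresh instance that replays the saved state from the file at startup
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json

The same file can instead be replayed into a target started with --wait-for-rpc via scripts/rpc.py load_config, which is how this run primed the first instance before the checks above.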
00:05:17.227 16:24:54 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:17.227 16:24:54 -- json_config/json_config.sh@98 -- # local app=target 00:05:17.227 16:24:54 -- json_config/json_config.sh@99 -- # shift 00:05:17.227 16:24:54 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:17.227 16:24:54 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:17.227 16:24:54 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:17.227 16:24:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:17.227 16:24:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:17.227 16:24:54 -- json_config/json_config.sh@111 -- # app_pid[$app]=68188 00:05:17.227 16:24:54 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:17.227 Waiting for target to run... 00:05:17.227 16:24:54 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:17.227 16:24:54 -- json_config/json_config.sh@114 -- # waitforlisten 68188 /var/tmp/spdk_tgt.sock 00:05:17.227 16:24:54 -- common/autotest_common.sh@829 -- # '[' -z 68188 ']' 00:05:17.227 16:24:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.227 16:24:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.227 16:24:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.227 16:24:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.227 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:05:17.227 [2024-11-16 16:24:54.631624] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:17.227 [2024-11-16 16:24:54.631738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68188 ] 00:05:17.795 [2024-11-16 16:24:55.064997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.795 [2024-11-16 16:24:55.115536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.795 [2024-11-16 16:24:55.115722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.054 [2024-11-16 16:24:55.413257] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.054 [2024-11-16 16:24:55.445333] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.313 16:24:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.313 16:24:55 -- common/autotest_common.sh@862 -- # return 0 00:05:18.313 00:05:18.313 16:24:55 -- json_config/json_config.sh@115 -- # echo '' 00:05:18.313 16:24:55 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:18.313 INFO: Checking if target configuration is the same... 00:05:18.313 16:24:55 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
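The json_diff.sh run traced below amounts to normalizing both JSON documents and diffing them. A minimal sketch of the same check, assuming config_filter.py filters stdin to stdout (as its use inside json_diff.sh suggests; the /tmp file names here are illustrative, the real script uses mktemp):

  # dump the live config and sort both documents so key/array order cannot cause false diffs
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config |
    test/json_config/config_filter.py -method sort > /tmp/live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'

Deleting any object from the live target (the test removes MallocBdevForConfigChangeCheck) then makes the diff exit non-zero, which is the "configuration change detected" path seen further on.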
00:05:18.313 16:24:55 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:18.313 16:24:55 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.313 16:24:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.313 + '[' 2 -ne 2 ']' 00:05:18.313 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:18.313 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:18.313 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:18.313 +++ basename /dev/fd/62 00:05:18.313 ++ mktemp /tmp/62.XXX 00:05:18.313 + tmp_file_1=/tmp/62.zzU 00:05:18.313 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.313 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.313 + tmp_file_2=/tmp/spdk_tgt_config.json.8cP 00:05:18.313 + ret=0 00:05:18.313 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:18.572 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:18.572 + diff -u /tmp/62.zzU /tmp/spdk_tgt_config.json.8cP 00:05:18.572 INFO: JSON config files are the same 00:05:18.572 + echo 'INFO: JSON config files are the same' 00:05:18.572 + rm /tmp/62.zzU /tmp/spdk_tgt_config.json.8cP 00:05:18.572 + exit 0 00:05:18.572 16:24:56 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:18.572 INFO: changing configuration and checking if this can be detected... 00:05:18.572 16:24:56 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:18.572 16:24:56 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.572 16:24:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.830 16:24:56 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.830 16:24:56 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:18.830 16:24:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.830 + '[' 2 -ne 2 ']' 00:05:18.830 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:18.830 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:18.830 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:18.830 +++ basename /dev/fd/62 00:05:18.830 ++ mktemp /tmp/62.XXX 00:05:18.830 + tmp_file_1=/tmp/62.Xnl 00:05:18.830 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:18.830 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.830 + tmp_file_2=/tmp/spdk_tgt_config.json.zXc 00:05:18.830 + ret=0 00:05:18.830 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:19.397 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:19.397 + diff -u /tmp/62.Xnl /tmp/spdk_tgt_config.json.zXc 00:05:19.397 + ret=1 00:05:19.397 + echo '=== Start of file: /tmp/62.Xnl ===' 00:05:19.397 + cat /tmp/62.Xnl 00:05:19.397 + echo '=== End of file: /tmp/62.Xnl ===' 00:05:19.397 + echo '' 00:05:19.397 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zXc ===' 00:05:19.397 + cat /tmp/spdk_tgt_config.json.zXc 00:05:19.397 + echo '=== End of file: /tmp/spdk_tgt_config.json.zXc ===' 00:05:19.397 + echo '' 00:05:19.397 + rm /tmp/62.Xnl /tmp/spdk_tgt_config.json.zXc 00:05:19.397 + exit 1 00:05:19.397 INFO: configuration change detected. 00:05:19.397 16:24:56 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:19.397 16:24:56 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:19.397 16:24:56 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:19.397 16:24:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.397 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:05:19.397 16:24:56 -- json_config/json_config.sh@360 -- # local ret=0 00:05:19.397 16:24:56 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:19.397 16:24:56 -- json_config/json_config.sh@370 -- # [[ -n 68188 ]] 00:05:19.397 16:24:56 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:19.397 16:24:56 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:19.397 16:24:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:19.397 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:05:19.397 16:24:56 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:19.397 16:24:56 -- json_config/json_config.sh@246 -- # uname -s 00:05:19.397 16:24:56 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:19.397 16:24:56 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:19.397 16:24:56 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:19.397 16:24:56 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:19.397 16:24:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.397 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:05:19.397 16:24:56 -- json_config/json_config.sh@376 -- # killprocess 68188 00:05:19.397 16:24:56 -- common/autotest_common.sh@936 -- # '[' -z 68188 ']' 00:05:19.397 16:24:56 -- common/autotest_common.sh@940 -- # kill -0 68188 00:05:19.397 16:24:56 -- common/autotest_common.sh@941 -- # uname 00:05:19.397 16:24:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.397 16:24:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68188 00:05:19.397 16:24:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.397 16:24:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.397 killing process with pid 68188 00:05:19.397 16:24:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68188' 00:05:19.397 
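[Editor's note] After the matching-config check passes, the test mutates the live configuration by deleting the MallocBdevForConfigChangeCheck bdev over RPC and repeats the same sorted diff, this time requiring it to fail; only then does it tear the target down via the killprocess sequence traced around this point. A hedged sketch of that second check (rpc.py, save_config and bdev_malloc_delete are real SPDK entry points; the wrapper below and the normalize_json helper from the earlier sketch are illustrative):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck  # mutate live config
    $rpc save_config > live_config.json
    normalize_json live_config.json     > /tmp/live.sorted
    normalize_json spdk_tgt_config.json > /tmp/saved.sorted
    if diff -u /tmp/live.sorted /tmp/saved.sorted; then
        echo 'ERROR: configuration change was not detected' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'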
16:24:56 -- common/autotest_common.sh@955 -- # kill 68188 00:05:19.397 16:24:56 -- common/autotest_common.sh@960 -- # wait 68188 00:05:19.655 16:24:57 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:19.655 16:24:57 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:19.655 16:24:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.655 16:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:19.655 16:24:57 -- json_config/json_config.sh@381 -- # return 0 00:05:19.655 INFO: Success 00:05:19.655 16:24:57 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:19.655 00:05:19.655 real 0m8.159s 00:05:19.655 user 0m11.497s 00:05:19.655 sys 0m1.858s 00:05:19.655 16:24:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.655 16:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:19.655 ************************************ 00:05:19.655 END TEST json_config 00:05:19.655 ************************************ 00:05:19.655 16:24:57 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.655 16:24:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.655 16:24:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.655 16:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:19.914 ************************************ 00:05:19.914 START TEST json_config_extra_key 00:05:19.914 ************************************ 00:05:19.914 16:24:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.914 16:24:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.914 16:24:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.914 16:24:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.914 16:24:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.914 16:24:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.914 16:24:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.914 16:24:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.914 16:24:57 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.914 16:24:57 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.914 16:24:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.914 16:24:57 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.914 16:24:57 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.914 16:24:57 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.914 16:24:57 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.914 16:24:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.914 16:24:57 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.914 16:24:57 -- scripts/common.sh@344 -- # : 1 00:05:19.914 16:24:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.914 16:24:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.914 16:24:57 -- scripts/common.sh@364 -- # decimal 1 00:05:19.914 16:24:57 -- scripts/common.sh@352 -- # local d=1 00:05:19.914 16:24:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.914 16:24:57 -- scripts/common.sh@354 -- # echo 1 00:05:19.914 16:24:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:19.914 16:24:57 -- scripts/common.sh@365 -- # decimal 2 00:05:19.914 16:24:57 -- scripts/common.sh@352 -- # local d=2 00:05:19.914 16:24:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.914 16:24:57 -- scripts/common.sh@354 -- # echo 2 00:05:19.914 16:24:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:19.914 16:24:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:19.914 16:24:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:19.914 16:24:57 -- scripts/common.sh@367 -- # return 0 00:05:19.914 16:24:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.914 16:24:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:19.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.914 --rc genhtml_branch_coverage=1 00:05:19.914 --rc genhtml_function_coverage=1 00:05:19.914 --rc genhtml_legend=1 00:05:19.914 --rc geninfo_all_blocks=1 00:05:19.914 --rc geninfo_unexecuted_blocks=1 00:05:19.914 00:05:19.914 ' 00:05:19.914 16:24:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:19.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.914 --rc genhtml_branch_coverage=1 00:05:19.914 --rc genhtml_function_coverage=1 00:05:19.914 --rc genhtml_legend=1 00:05:19.914 --rc geninfo_all_blocks=1 00:05:19.914 --rc geninfo_unexecuted_blocks=1 00:05:19.914 00:05:19.914 ' 00:05:19.914 16:24:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:19.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.914 --rc genhtml_branch_coverage=1 00:05:19.915 --rc genhtml_function_coverage=1 00:05:19.915 --rc genhtml_legend=1 00:05:19.915 --rc geninfo_all_blocks=1 00:05:19.915 --rc geninfo_unexecuted_blocks=1 00:05:19.915 00:05:19.915 ' 00:05:19.915 16:24:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:19.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.915 --rc genhtml_branch_coverage=1 00:05:19.915 --rc genhtml_function_coverage=1 00:05:19.915 --rc genhtml_legend=1 00:05:19.915 --rc geninfo_all_blocks=1 00:05:19.915 --rc geninfo_unexecuted_blocks=1 00:05:19.915 00:05:19.915 ' 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.915 16:24:57 -- nvmf/common.sh@7 -- # uname -s 00:05:19.915 16:24:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.915 16:24:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.915 16:24:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.915 16:24:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.915 16:24:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.915 16:24:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.915 16:24:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.915 16:24:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.915 16:24:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.915 16:24:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.915 16:24:57 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:05:19.915 16:24:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:05:19.915 16:24:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.915 16:24:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.915 16:24:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.915 16:24:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.915 16:24:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.915 16:24:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.915 16:24:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.915 16:24:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.915 16:24:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.915 16:24:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.915 16:24:57 -- paths/export.sh@5 -- # export PATH 00:05:19.915 16:24:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.915 16:24:57 -- nvmf/common.sh@46 -- # : 0 00:05:19.915 16:24:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:19.915 16:24:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:19.915 16:24:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:19.915 16:24:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.915 16:24:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.915 16:24:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:19.915 16:24:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:19.915 16:24:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:19.915 INFO: launching applications... 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68371 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:19.915 Waiting for target to run... 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68371 /var/tmp/spdk_tgt.sock 00:05:19.915 16:24:57 -- common/autotest_common.sh@829 -- # '[' -z 68371 ']' 00:05:19.915 16:24:57 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.915 16:24:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.915 16:24:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.915 16:24:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.915 16:24:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.915 16:24:57 -- common/autotest_common.sh@10 -- # set +x 00:05:19.915 [2024-11-16 16:24:57.398565] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
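[Editor's note] json_config_extra_key starts a second spdk_tgt (pid 68371) against test/json_config/extra_key.json and blocks in waitforlisten until the RPC socket answers before issuing any commands. An illustrative version of that polling idea (wait_for_rpc_sock is a made-up name, not SPDK's actual helper in autotest_common.sh; rpc_get_methods is a real SPDK RPC that fails until the target is listening):

    wait_for_rpc_sock() {
        local sock=$1 retries=${2:-100}
        while (( retries-- > 0 )); do
            # rpc.py exits non-zero until something accepts on $sock
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                   rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        echo "ERROR: no listener on $sock" >&2
        return 1
    }
    wait_for_rpc_sock /var/tmp/spdk_tgt.sock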
00:05:19.915 [2024-11-16 16:24:57.398669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68371 ] 00:05:20.488 [2024-11-16 16:24:57.931082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.746 [2024-11-16 16:24:57.996407] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.746 [2024-11-16 16:24:57.996544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.005 16:24:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.005 16:24:58 -- common/autotest_common.sh@862 -- # return 0 00:05:21.005 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:21.005 INFO: shutting down applications... 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68371 ]] 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68371 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68371 00:05:21.005 16:24:58 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:21.572 16:24:58 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:21.572 16:24:58 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:21.572 16:24:58 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68371 00:05:21.572 16:24:58 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:21.572 16:24:58 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:21.572 16:24:58 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:21.572 SPDK target shutdown done 00:05:21.572 16:24:58 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:21.572 Success 00:05:21.572 16:24:58 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:21.572 00:05:21.572 real 0m1.756s 00:05:21.572 user 0m1.486s 00:05:21.572 sys 0m0.576s 00:05:21.572 16:24:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.572 16:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:21.572 ************************************ 00:05:21.572 END TEST json_config_extra_key 00:05:21.572 ************************************ 00:05:21.572 16:24:58 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.572 16:24:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.572 16:24:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.572 16:24:58 -- common/autotest_common.sh@10 -- # set +x 00:05:21.572 ************************************ 00:05:21.572 START TEST alias_rpc 00:05:21.572 ************************************ 00:05:21.572 16:24:58 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.572 * Looking for test storage... 00:05:21.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:21.572 16:24:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.572 16:24:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.572 16:24:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.831 16:24:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.831 16:24:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.831 16:24:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.831 16:24:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.831 16:24:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.831 16:24:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.831 16:24:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.831 16:24:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.831 16:24:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.831 16:24:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.831 16:24:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.831 16:24:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.831 16:24:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.831 16:24:59 -- scripts/common.sh@344 -- # : 1 00:05:21.831 16:24:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.831 16:24:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.831 16:24:59 -- scripts/common.sh@364 -- # decimal 1 00:05:21.831 16:24:59 -- scripts/common.sh@352 -- # local d=1 00:05:21.831 16:24:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.831 16:24:59 -- scripts/common.sh@354 -- # echo 1 00:05:21.831 16:24:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.831 16:24:59 -- scripts/common.sh@365 -- # decimal 2 00:05:21.831 16:24:59 -- scripts/common.sh@352 -- # local d=2 00:05:21.831 16:24:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.831 16:24:59 -- scripts/common.sh@354 -- # echo 2 00:05:21.831 16:24:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.831 16:24:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.831 16:24:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.831 16:24:59 -- scripts/common.sh@367 -- # return 0 00:05:21.831 16:24:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.831 16:24:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.831 --rc genhtml_branch_coverage=1 00:05:21.831 --rc genhtml_function_coverage=1 00:05:21.831 --rc genhtml_legend=1 00:05:21.831 --rc geninfo_all_blocks=1 00:05:21.831 --rc geninfo_unexecuted_blocks=1 00:05:21.831 00:05:21.831 ' 00:05:21.831 16:24:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.831 --rc genhtml_branch_coverage=1 00:05:21.831 --rc genhtml_function_coverage=1 00:05:21.831 --rc genhtml_legend=1 00:05:21.831 --rc geninfo_all_blocks=1 00:05:21.831 --rc geninfo_unexecuted_blocks=1 00:05:21.831 00:05:21.831 ' 00:05:21.831 16:24:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.831 --rc genhtml_branch_coverage=1 00:05:21.831 --rc genhtml_function_coverage=1 00:05:21.831 --rc genhtml_legend=1 
00:05:21.831 --rc geninfo_all_blocks=1 00:05:21.831 --rc geninfo_unexecuted_blocks=1 00:05:21.831 00:05:21.831 ' 00:05:21.831 16:24:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.831 --rc genhtml_branch_coverage=1 00:05:21.831 --rc genhtml_function_coverage=1 00:05:21.831 --rc genhtml_legend=1 00:05:21.831 --rc geninfo_all_blocks=1 00:05:21.831 --rc geninfo_unexecuted_blocks=1 00:05:21.831 00:05:21.831 ' 00:05:21.831 16:24:59 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.831 16:24:59 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68449 00:05:21.831 16:24:59 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68449 00:05:21.831 16:24:59 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.831 16:24:59 -- common/autotest_common.sh@829 -- # '[' -z 68449 ']' 00:05:21.831 16:24:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.831 16:24:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.831 16:24:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.831 16:24:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.831 16:24:59 -- common/autotest_common.sh@10 -- # set +x 00:05:21.831 [2024-11-16 16:24:59.222415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:21.831 [2024-11-16 16:24:59.222530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68449 ] 00:05:22.090 [2024-11-16 16:24:59.361222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.090 [2024-11-16 16:24:59.415306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.090 [2024-11-16 16:24:59.415466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.026 16:25:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.026 16:25:00 -- common/autotest_common.sh@862 -- # return 0 00:05:23.026 16:25:00 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:23.026 16:25:00 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68449 00:05:23.026 16:25:00 -- common/autotest_common.sh@936 -- # '[' -z 68449 ']' 00:05:23.026 16:25:00 -- common/autotest_common.sh@940 -- # kill -0 68449 00:05:23.026 16:25:00 -- common/autotest_common.sh@941 -- # uname 00:05:23.026 16:25:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:23.026 16:25:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68449 00:05:23.284 16:25:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:23.285 16:25:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:23.285 killing process with pid 68449 00:05:23.285 16:25:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68449' 00:05:23.285 16:25:00 -- common/autotest_common.sh@955 -- # kill 68449 00:05:23.285 16:25:00 -- common/autotest_common.sh@960 -- # wait 68449 00:05:23.544 00:05:23.544 real 0m1.881s 00:05:23.544 user 0m2.128s 00:05:23.544 sys 0m0.469s 00:05:23.544 16:25:00 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.544 16:25:00 -- common/autotest_common.sh@10 -- # set +x 00:05:23.544 ************************************ 00:05:23.544 END TEST alias_rpc 00:05:23.544 ************************************ 00:05:23.544 16:25:00 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:23.544 16:25:00 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.544 16:25:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.544 16:25:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.544 16:25:00 -- common/autotest_common.sh@10 -- # set +x 00:05:23.544 ************************************ 00:05:23.544 START TEST dpdk_mem_utility 00:05:23.544 ************************************ 00:05:23.544 16:25:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.544 * Looking for test storage... 00:05:23.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:23.544 16:25:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:23.544 16:25:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:23.544 16:25:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:23.802 16:25:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:23.802 16:25:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:23.802 16:25:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:23.802 16:25:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:23.802 16:25:01 -- scripts/common.sh@335 -- # IFS=.-: 00:05:23.802 16:25:01 -- scripts/common.sh@335 -- # read -ra ver1 00:05:23.802 16:25:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.802 16:25:01 -- scripts/common.sh@336 -- # read -ra ver2 00:05:23.802 16:25:01 -- scripts/common.sh@337 -- # local 'op=<' 00:05:23.802 16:25:01 -- scripts/common.sh@339 -- # ver1_l=2 00:05:23.802 16:25:01 -- scripts/common.sh@340 -- # ver2_l=1 00:05:23.802 16:25:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:23.802 16:25:01 -- scripts/common.sh@343 -- # case "$op" in 00:05:23.803 16:25:01 -- scripts/common.sh@344 -- # : 1 00:05:23.803 16:25:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:23.803 16:25:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.803 16:25:01 -- scripts/common.sh@364 -- # decimal 1 00:05:23.803 16:25:01 -- scripts/common.sh@352 -- # local d=1 00:05:23.803 16:25:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.803 16:25:01 -- scripts/common.sh@354 -- # echo 1 00:05:23.803 16:25:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:23.803 16:25:01 -- scripts/common.sh@365 -- # decimal 2 00:05:23.803 16:25:01 -- scripts/common.sh@352 -- # local d=2 00:05:23.803 16:25:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.803 16:25:01 -- scripts/common.sh@354 -- # echo 2 00:05:23.803 16:25:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:23.803 16:25:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:23.803 16:25:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:23.803 16:25:01 -- scripts/common.sh@367 -- # return 0 00:05:23.803 16:25:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.803 16:25:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 16:25:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 16:25:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 16:25:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:23.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.803 --rc genhtml_branch_coverage=1 00:05:23.803 --rc genhtml_function_coverage=1 00:05:23.803 --rc genhtml_legend=1 00:05:23.803 --rc geninfo_all_blocks=1 00:05:23.803 --rc geninfo_unexecuted_blocks=1 00:05:23.803 00:05:23.803 ' 00:05:23.803 16:25:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:23.803 16:25:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68548 00:05:23.803 16:25:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68548 00:05:23.803 16:25:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.803 16:25:01 -- common/autotest_common.sh@829 -- # '[' -z 68548 ']' 00:05:23.803 16:25:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.803 16:25:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.803 16:25:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
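[Editor's note] Once this spdk_tgt (pid 68548) is listening, the test calls the env_dpdk_get_mem_stats RPC, which, as the trace below shows, replies with {"filename": "/tmp/spdk_mem_dump.txt"}, and then runs scripts/dpdk_mem_info.py over that dump: once for the heap/mempool/memzone summary and once with -m 0 for the per-heap element listing that follows. A hedged sketch of those two steps (the RPC and the script are real SPDK entry points; the wrapper itself and the assumption that the script reads the default dump path are illustrative):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'
    $rpc env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    # summarize the dump (default dump path assumed)
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # per-element detail for heap 0
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0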
00:05:23.803 16:25:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.803 16:25:01 -- common/autotest_common.sh@10 -- # set +x 00:05:23.803 [2024-11-16 16:25:01.144162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:23.803 [2024-11-16 16:25:01.144261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68548 ] 00:05:23.803 [2024-11-16 16:25:01.281118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.062 [2024-11-16 16:25:01.339593] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.062 [2024-11-16 16:25:01.339742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.630 16:25:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.630 16:25:02 -- common/autotest_common.sh@862 -- # return 0 00:05:24.630 16:25:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:24.630 16:25:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:24.630 16:25:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.630 16:25:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.630 { 00:05:24.630 "filename": "/tmp/spdk_mem_dump.txt" 00:05:24.630 } 00:05:24.630 16:25:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.630 16:25:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.890 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:24.890 1 heaps totaling size 814.000000 MiB 00:05:24.890 size: 814.000000 MiB heap id: 0 00:05:24.890 end heaps---------- 00:05:24.890 8 mempools totaling size 598.116089 MiB 00:05:24.890 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:24.890 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:24.890 size: 84.521057 MiB name: bdev_io_68548 00:05:24.890 size: 51.011292 MiB name: evtpool_68548 00:05:24.890 size: 50.003479 MiB name: msgpool_68548 00:05:24.890 size: 21.763794 MiB name: PDU_Pool 00:05:24.890 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:24.890 size: 0.026123 MiB name: Session_Pool 00:05:24.890 end mempools------- 00:05:24.890 6 memzones totaling size 4.142822 MiB 00:05:24.890 size: 1.000366 MiB name: RG_ring_0_68548 00:05:24.890 size: 1.000366 MiB name: RG_ring_1_68548 00:05:24.890 size: 1.000366 MiB name: RG_ring_4_68548 00:05:24.890 size: 1.000366 MiB name: RG_ring_5_68548 00:05:24.890 size: 0.125366 MiB name: RG_ring_2_68548 00:05:24.890 size: 0.015991 MiB name: RG_ring_3_68548 00:05:24.890 end memzones------- 00:05:24.890 16:25:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:24.890 heap id: 0 total size: 814.000000 MiB number of busy elements: 211 number of free elements: 15 00:05:24.890 list of free elements. 
size: 12.488220 MiB 00:05:24.890 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:24.890 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:24.890 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:24.890 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:24.890 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:24.890 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:24.890 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:24.890 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:24.890 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:24.890 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:05:24.890 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:24.890 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:24.890 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:24.890 element at address: 0x200027e00000 with size: 0.398865 MiB 00:05:24.890 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:24.890 list of standard malloc elements. size: 199.249207 MiB 00:05:24.890 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:24.890 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:24.890 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:24.890 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:24.890 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:24.890 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:24.890 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:24.890 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:24.890 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:24.890 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:05:24.890 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:24.890 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:24.890 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:24.891 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:24.891 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e661c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e66280 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6ce80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6e880 with size: 0.000183 MiB 
00:05:24.891 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:24.891 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:24.891 list of memzone associated elements. 
size: 602.262573 MiB 00:05:24.891 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:24.891 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:24.891 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:24.891 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:24.891 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:24.891 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68548_0 00:05:24.891 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:24.891 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68548_0 00:05:24.891 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:24.891 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68548_0 00:05:24.891 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:24.891 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:24.891 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:24.891 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:24.891 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:24.891 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68548 00:05:24.891 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:24.891 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68548 00:05:24.891 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:24.891 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68548 00:05:24.892 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:24.892 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:24.892 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:24.892 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:24.892 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:24.892 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:24.892 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:24.892 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:24.892 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:24.892 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68548 00:05:24.892 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:24.892 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68548 00:05:24.892 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:24.892 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68548 00:05:24.892 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:24.892 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68548 00:05:24.892 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:24.892 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68548 00:05:24.892 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:24.892 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:24.892 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:24.892 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:24.892 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:24.892 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:24.892 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:24.892 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68548 00:05:24.892 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:24.892 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:24.892 element at address: 0x200027e66340 with size: 0.023743 MiB 00:05:24.892 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:24.892 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:24.892 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68548 00:05:24.892 element at address: 0x200027e6c480 with size: 0.002441 MiB 00:05:24.892 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:24.892 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:24.892 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68548 00:05:24.892 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:24.892 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68548 00:05:24.892 element at address: 0x200027e6cf40 with size: 0.000305 MiB 00:05:24.892 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:24.892 16:25:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:24.892 16:25:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68548 00:05:24.892 16:25:02 -- common/autotest_common.sh@936 -- # '[' -z 68548 ']' 00:05:24.892 16:25:02 -- common/autotest_common.sh@940 -- # kill -0 68548 00:05:24.892 16:25:02 -- common/autotest_common.sh@941 -- # uname 00:05:24.892 16:25:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.892 16:25:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68548 00:05:24.892 16:25:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.892 killing process with pid 68548 00:05:24.892 16:25:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.892 16:25:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68548' 00:05:24.892 16:25:02 -- common/autotest_common.sh@955 -- # kill 68548 00:05:24.892 16:25:02 -- common/autotest_common.sh@960 -- # wait 68548 00:05:25.151 00:05:25.151 real 0m1.722s 00:05:25.151 user 0m1.823s 00:05:25.151 sys 0m0.468s 00:05:25.151 16:25:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.151 ************************************ 00:05:25.151 END TEST dpdk_mem_utility 00:05:25.151 ************************************ 00:05:25.151 16:25:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.409 16:25:02 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.409 16:25:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.409 16:25:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.409 16:25:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.409 ************************************ 00:05:25.409 START TEST event 00:05:25.409 ************************************ 00:05:25.409 16:25:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.409 * Looking for test storage... 
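[Editor's note] Every test file in this run opens with the same lcov gate: common/autotest_common.sh extracts the lcov version, and scripts/common.sh's cmp_versions/lt decide whether the branch-coverage LCOV_OPTS get exported; the event suite starting below repeats it once more. A condensed, simplified sketch of that dot-separated comparison (version_lt is an illustrative name; the real cmp_versions also handles '-' separators and operators other than '<'):

    version_lt() {
        local IFS=. i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly less
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 < 2: enable lcov_branch_coverage opts'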
00:05:25.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.409 16:25:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.409 16:25:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.409 16:25:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.409 16:25:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.409 16:25:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.409 16:25:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.409 16:25:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.409 16:25:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.409 16:25:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.409 16:25:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.409 16:25:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.409 16:25:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.409 16:25:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.409 16:25:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.409 16:25:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.409 16:25:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.409 16:25:02 -- scripts/common.sh@344 -- # : 1 00:05:25.410 16:25:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.410 16:25:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.410 16:25:02 -- scripts/common.sh@364 -- # decimal 1 00:05:25.410 16:25:02 -- scripts/common.sh@352 -- # local d=1 00:05:25.410 16:25:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.410 16:25:02 -- scripts/common.sh@354 -- # echo 1 00:05:25.410 16:25:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.410 16:25:02 -- scripts/common.sh@365 -- # decimal 2 00:05:25.410 16:25:02 -- scripts/common.sh@352 -- # local d=2 00:05:25.410 16:25:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.410 16:25:02 -- scripts/common.sh@354 -- # echo 2 00:05:25.410 16:25:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.410 16:25:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.410 16:25:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.410 16:25:02 -- scripts/common.sh@367 -- # return 0 00:05:25.410 16:25:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.410 16:25:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.410 --rc genhtml_branch_coverage=1 00:05:25.410 --rc genhtml_function_coverage=1 00:05:25.410 --rc genhtml_legend=1 00:05:25.410 --rc geninfo_all_blocks=1 00:05:25.410 --rc geninfo_unexecuted_blocks=1 00:05:25.410 00:05:25.410 ' 00:05:25.410 16:25:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.410 --rc genhtml_branch_coverage=1 00:05:25.410 --rc genhtml_function_coverage=1 00:05:25.410 --rc genhtml_legend=1 00:05:25.410 --rc geninfo_all_blocks=1 00:05:25.410 --rc geninfo_unexecuted_blocks=1 00:05:25.410 00:05:25.410 ' 00:05:25.410 16:25:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.410 --rc genhtml_branch_coverage=1 00:05:25.410 --rc genhtml_function_coverage=1 00:05:25.410 --rc genhtml_legend=1 00:05:25.410 --rc geninfo_all_blocks=1 00:05:25.410 --rc geninfo_unexecuted_blocks=1 00:05:25.410 00:05:25.410 ' 00:05:25.410 16:25:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.410 --rc genhtml_branch_coverage=1 00:05:25.410 --rc genhtml_function_coverage=1 00:05:25.410 --rc genhtml_legend=1 00:05:25.410 --rc geninfo_all_blocks=1 00:05:25.410 --rc geninfo_unexecuted_blocks=1 00:05:25.410 00:05:25.410 ' 00:05:25.410 16:25:02 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:25.410 16:25:02 -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.410 16:25:02 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.410 16:25:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:25.410 16:25:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.410 16:25:02 -- common/autotest_common.sh@10 -- # set +x 00:05:25.410 ************************************ 00:05:25.410 START TEST event_perf 00:05:25.410 ************************************ 00:05:25.410 16:25:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.410 Running I/O for 1 seconds...[2024-11-16 16:25:02.853796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.410 [2024-11-16 16:25:02.853887] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68650 ] 00:05:25.669 [2024-11-16 16:25:02.989762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.669 [2024-11-16 16:25:03.044101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.669 [2024-11-16 16:25:03.044259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.669 [2024-11-16 16:25:03.044433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.669 Running I/O for 1 seconds...[2024-11-16 16:25:03.044634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.046 00:05:27.046 lcore 0: 130323 00:05:27.046 lcore 1: 130320 00:05:27.046 lcore 2: 130320 00:05:27.046 lcore 3: 130322 00:05:27.046 done. 00:05:27.046 00:05:27.046 real 0m1.275s 00:05:27.046 user 0m4.094s 00:05:27.046 sys 0m0.063s 00:05:27.046 16:25:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.046 16:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:27.046 ************************************ 00:05:27.046 END TEST event_perf 00:05:27.046 ************************************ 00:05:27.046 16:25:04 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.046 16:25:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:27.046 16:25:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.046 16:25:04 -- common/autotest_common.sh@10 -- # set +x 00:05:27.046 ************************************ 00:05:27.046 START TEST event_reactor 00:05:27.046 ************************************ 00:05:27.046 16:25:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.046 [2024-11-16 16:25:04.183085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
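[annotation] The scripts/common.sh trace above (lt 1.15 2, expanding to cmp_versions 1.15 '<' 2) is the harness deciding whether the installed lcov predates 2.0 so it can pick the matching --rc option spelling. The mechanism is visible in the trace: IFS=.-: splits each version string into fields, and the fields are compared left to right. A condensed, numeric-fields-only sketch of the same idea (the in-tree cmp_versions also handles the '>' operator and non-numeric fields):

lt() {   # lt A B: succeed when version A sorts before version B
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "legacy lcov: use --rc lcov_branch_coverage=1 spelling"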
00:05:27.046 [2024-11-16 16:25:04.183199] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68683 ] 00:05:27.046 [2024-11-16 16:25:04.315946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.047 [2024-11-16 16:25:04.366160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.983 test_start 00:05:27.983 oneshot 00:05:27.983 tick 100 00:05:27.983 tick 100 00:05:27.983 tick 250 00:05:27.983 tick 100 00:05:27.983 tick 100 00:05:27.983 tick 100 00:05:27.983 tick 250 00:05:27.983 tick 500 00:05:27.983 tick 100 00:05:27.983 tick 100 00:05:27.983 tick 250 00:05:27.983 tick 100 00:05:27.983 tick 100 00:05:27.983 test_end 00:05:27.983 00:05:27.983 real 0m1.249s 00:05:27.983 user 0m1.096s 00:05:27.983 sys 0m0.048s 00:05:27.983 16:25:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.983 ************************************ 00:05:27.983 END TEST event_reactor 00:05:27.983 16:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.983 ************************************ 00:05:27.983 16:25:05 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:27.983 16:25:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:27.983 16:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.983 16:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.983 ************************************ 00:05:27.983 START TEST event_reactor_perf 00:05:27.983 ************************************ 00:05:27.983 16:25:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.243 [2024-11-16 16:25:05.480649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
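[annotation] Note the core masks in these runs: event_perf was launched with -m 0xF, so the EAL reports four cores and a reactor starts on each of lcores 0-3, while event_reactor (and the event_reactor_perf run starting here) uses -c 0x1 and gets a single reactor on lcore 0. Each set bit in the hex mask enables one lcore; a tiny decoder loop, illustrative only and not part of the test scripts:

mask=0xF                                     # lcores 0-3, as in event_perf
for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "lcore $core enabled"
done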
00:05:28.243 [2024-11-16 16:25:05.480733] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68724 ] 00:05:28.243 [2024-11-16 16:25:05.616933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.243 [2024-11-16 16:25:05.669075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.620 test_start 00:05:29.620 test_end 00:05:29.620 Performance: 467909 events per second 00:05:29.620 00:05:29.620 real 0m1.254s 00:05:29.620 user 0m1.103s 00:05:29.620 sys 0m0.045s 00:05:29.620 16:25:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.620 16:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:29.620 ************************************ 00:05:29.620 END TEST event_reactor_perf 00:05:29.620 ************************************ 00:05:29.620 16:25:06 -- event/event.sh@49 -- # uname -s 00:05:29.620 16:25:06 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:29.620 16:25:06 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.620 16:25:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.620 16:25:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.620 16:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:29.620 ************************************ 00:05:29.620 START TEST event_scheduler 00:05:29.620 ************************************ 00:05:29.620 16:25:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.620 * Looking for test storage... 00:05:29.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:29.620 16:25:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:29.620 16:25:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:29.620 16:25:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:29.620 16:25:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:29.620 16:25:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:29.620 16:25:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:29.620 16:25:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:29.620 16:25:06 -- scripts/common.sh@335 -- # IFS=.-: 00:05:29.620 16:25:06 -- scripts/common.sh@335 -- # read -ra ver1 00:05:29.620 16:25:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.620 16:25:06 -- scripts/common.sh@336 -- # read -ra ver2 00:05:29.620 16:25:06 -- scripts/common.sh@337 -- # local 'op=<' 00:05:29.620 16:25:06 -- scripts/common.sh@339 -- # ver1_l=2 00:05:29.620 16:25:06 -- scripts/common.sh@340 -- # ver2_l=1 00:05:29.620 16:25:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:29.620 16:25:06 -- scripts/common.sh@343 -- # case "$op" in 00:05:29.620 16:25:06 -- scripts/common.sh@344 -- # : 1 00:05:29.620 16:25:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:29.620 16:25:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.620 16:25:06 -- scripts/common.sh@364 -- # decimal 1 00:05:29.620 16:25:06 -- scripts/common.sh@352 -- # local d=1 00:05:29.620 16:25:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.620 16:25:06 -- scripts/common.sh@354 -- # echo 1 00:05:29.620 16:25:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:29.620 16:25:06 -- scripts/common.sh@365 -- # decimal 2 00:05:29.620 16:25:06 -- scripts/common.sh@352 -- # local d=2 00:05:29.620 16:25:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.620 16:25:06 -- scripts/common.sh@354 -- # echo 2 00:05:29.620 16:25:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:29.620 16:25:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:29.620 16:25:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:29.620 16:25:06 -- scripts/common.sh@367 -- # return 0 00:05:29.620 16:25:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.620 16:25:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:29.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.620 --rc genhtml_branch_coverage=1 00:05:29.620 --rc genhtml_function_coverage=1 00:05:29.620 --rc genhtml_legend=1 00:05:29.620 --rc geninfo_all_blocks=1 00:05:29.620 --rc geninfo_unexecuted_blocks=1 00:05:29.620 00:05:29.620 ' 00:05:29.620 16:25:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:29.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.620 --rc genhtml_branch_coverage=1 00:05:29.620 --rc genhtml_function_coverage=1 00:05:29.620 --rc genhtml_legend=1 00:05:29.620 --rc geninfo_all_blocks=1 00:05:29.620 --rc geninfo_unexecuted_blocks=1 00:05:29.620 00:05:29.620 ' 00:05:29.620 16:25:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:29.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.620 --rc genhtml_branch_coverage=1 00:05:29.620 --rc genhtml_function_coverage=1 00:05:29.620 --rc genhtml_legend=1 00:05:29.620 --rc geninfo_all_blocks=1 00:05:29.620 --rc geninfo_unexecuted_blocks=1 00:05:29.620 00:05:29.620 ' 00:05:29.620 16:25:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:29.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.620 --rc genhtml_branch_coverage=1 00:05:29.620 --rc genhtml_function_coverage=1 00:05:29.620 --rc genhtml_legend=1 00:05:29.620 --rc geninfo_all_blocks=1 00:05:29.620 --rc geninfo_unexecuted_blocks=1 00:05:29.620 00:05:29.620 ' 00:05:29.620 16:25:06 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:29.620 16:25:06 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68787 00:05:29.620 16:25:06 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.620 16:25:06 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:29.620 16:25:06 -- scheduler/scheduler.sh@37 -- # waitforlisten 68787 00:05:29.620 16:25:06 -- common/autotest_common.sh@829 -- # '[' -z 68787 ']' 00:05:29.620 16:25:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.620 16:25:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.620 16:25:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
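[annotation] scheduler.sh above launches the scheduler app with --wait-for-rpc, so the framework pauses before init, and then blocks in waitforlisten until the RPC socket answers (the "Waiting for process to start up..." message). A minimal sketch of that wait loop, assuming rpc.py is on PATH and rpc_get_methods as the liveness probe; the real helper in autotest_common.sh is more defensive about socket paths and retry handling:

waitforlisten() {   # poll until the app behind $pid serves RPC on $rpc_addr
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    local i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1        # app died while starting
        if rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                  # socket is up, RPC answers
        fi
        sleep 0.1
    done
    return 1
}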
00:05:29.620 16:25:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.620 16:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:29.620 [2024-11-16 16:25:07.016349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.620 [2024-11-16 16:25:07.016615] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68787 ] 00:05:29.879 [2024-11-16 16:25:07.158648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.879 [2024-11-16 16:25:07.258807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.879 [2024-11-16 16:25:07.258950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.879 [2024-11-16 16:25:07.259085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.879 [2024-11-16 16:25:07.259095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.815 16:25:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.815 16:25:08 -- common/autotest_common.sh@862 -- # return 0 00:05:30.815 16:25:08 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.815 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.815 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.815 POWER: Env isn't set yet! 00:05:30.815 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:30.815 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.815 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.815 POWER: Attempting to initialise PSTAT power management... 00:05:30.815 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.815 POWER: Cannot set governor of lcore 0 to performance 00:05:30.815 POWER: Attempting to initialise AMD PSTATE power management... 00:05:30.815 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.815 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.815 POWER: Attempting to initialise CPPC power management... 00:05:30.815 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.815 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.815 POWER: Attempting to initialise VM power management... 
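[annotation] Each "POWER: Attempting to initialise ..." line in this trace is followed by a failure because the VM guest exposes no writable cpufreq sysfs for any backend (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC, and the VM guest channel probed next); the dynamic scheduler then proceeds without a dpdk governor, which the test tolerates, so these are expected notices rather than failures. The rpc_cmd calls in the trace map onto plain rpc.py invocations against the app's socket, roughly:

# switch scheduler while the app is paused by --wait-for-rpc
rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
# then let the framework finish initialization
rpc.py -s /var/tmp/spdk.sock framework_start_init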
00:05:30.815 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:30.815 POWER: Unable to set Power Management Environment for lcore 0 00:05:30.815 [2024-11-16 16:25:08.045825] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:30.815 [2024-11-16 16:25:08.045839] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:30.815 [2024-11-16 16:25:08.045848] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.815 [2024-11-16 16:25:08.045861] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.815 [2024-11-16 16:25:08.045868] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.815 [2024-11-16 16:25:08.045875] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.815 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.815 16:25:08 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.815 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.815 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 [2024-11-16 16:25:08.163071] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.816 16:25:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.816 16:25:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 ************************************ 00:05:30.816 START TEST scheduler_create_thread 00:05:30.816 ************************************ 00:05:30.816 16:25:08 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 2 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 3 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 4 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 5 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 6 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 7 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 8 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 9 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 10 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 16:25:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.816 16:25:08 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:30.816 16:25:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.816 16:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:32.719 16:25:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.719 16:25:09 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:32.719 16:25:09 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:32.719 16:25:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.719 16:25:09 -- common/autotest_common.sh@10 -- # set +x 00:05:33.657 16:25:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.657 00:05:33.657 real 0m2.612s 00:05:33.657 user 0m0.014s 00:05:33.657 sys 0m0.003s 00:05:33.657 16:25:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.657 16:25:10 -- common/autotest_common.sh@10 -- # set +x 00:05:33.657 
************************************ 00:05:33.657 END TEST scheduler_create_thread 00:05:33.657 ************************************ 00:05:33.657 16:25:10 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:33.657 16:25:10 -- scheduler/scheduler.sh@46 -- # killprocess 68787 00:05:33.657 16:25:10 -- common/autotest_common.sh@936 -- # '[' -z 68787 ']' 00:05:33.657 16:25:10 -- common/autotest_common.sh@940 -- # kill -0 68787 00:05:33.657 16:25:10 -- common/autotest_common.sh@941 -- # uname 00:05:33.657 16:25:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.657 16:25:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68787 00:05:33.657 killing process with pid 68787 00:05:33.657 16:25:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:33.657 16:25:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:33.657 16:25:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68787' 00:05:33.657 16:25:10 -- common/autotest_common.sh@955 -- # kill 68787 00:05:33.657 16:25:10 -- common/autotest_common.sh@960 -- # wait 68787 00:05:33.916 [2024-11-16 16:25:11.266980] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:34.175 ************************************ 00:05:34.175 END TEST event_scheduler 00:05:34.175 ************************************ 00:05:34.175 00:05:34.175 real 0m4.759s 00:05:34.175 user 0m8.922s 00:05:34.175 sys 0m0.461s 00:05:34.175 16:25:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.175 16:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:34.175 16:25:11 -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.175 16:25:11 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.175 16:25:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.175 16:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.175 16:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:34.175 ************************************ 00:05:34.175 START TEST app_repeat 00:05:34.175 ************************************ 00:05:34.175 16:25:11 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:34.175 16:25:11 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.175 16:25:11 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.175 16:25:11 -- event/event.sh@13 -- # local nbd_list 00:05:34.175 16:25:11 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.175 16:25:11 -- event/event.sh@14 -- # local bdev_list 00:05:34.175 16:25:11 -- event/event.sh@15 -- # local repeat_times=4 00:05:34.175 16:25:11 -- event/event.sh@17 -- # modprobe nbd 00:05:34.175 16:25:11 -- event/event.sh@19 -- # repeat_pid=68905 00:05:34.175 Process app_repeat pid: 68905 00:05:34.175 16:25:11 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.175 16:25:11 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.175 16:25:11 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68905' 00:05:34.175 16:25:11 -- event/event.sh@23 -- # for i in {0..2} 00:05:34.175 spdk_app_start Round 0 00:05:34.175 16:25:11 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.175 16:25:11 -- event/event.sh@25 -- # waitforlisten 68905 /var/tmp/spdk-nbd.sock 00:05:34.175 16:25:11 -- common/autotest_common.sh@829 -- # '[' -z 68905 ']' 00:05:34.175 16:25:11 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.175 16:25:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.175 16:25:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.175 16:25:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.175 16:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:34.175 [2024-11-16 16:25:11.627186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.175 [2024-11-16 16:25:11.627278] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68905 ] 00:05:34.434 [2024-11-16 16:25:11.759248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.434 [2024-11-16 16:25:11.814096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.434 [2024-11-16 16:25:11.814116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.371 16:25:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.371 16:25:12 -- common/autotest_common.sh@862 -- # return 0 00:05:35.371 16:25:12 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.629 Malloc0 00:05:35.629 16:25:12 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.890 Malloc1 00:05:35.890 16:25:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@12 -- # local i 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.890 16:25:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.167 /dev/nbd0 00:05:36.167 16:25:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.167 16:25:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.167 16:25:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:36.167 16:25:13 -- common/autotest_common.sh@867 -- # local i 00:05:36.167 16:25:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.167 
16:25:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.167 16:25:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:36.167 16:25:13 -- common/autotest_common.sh@871 -- # break 00:05:36.167 16:25:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.167 16:25:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.167 16:25:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.167 1+0 records in 00:05:36.167 1+0 records out 00:05:36.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223124 s, 18.4 MB/s 00:05:36.167 16:25:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.167 16:25:13 -- common/autotest_common.sh@884 -- # size=4096 00:05:36.167 16:25:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.167 16:25:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.167 16:25:13 -- common/autotest_common.sh@887 -- # return 0 00:05:36.167 16:25:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.167 16:25:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.167 16:25:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.450 /dev/nbd1 00:05:36.450 16:25:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.450 16:25:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.450 16:25:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:36.450 16:25:13 -- common/autotest_common.sh@867 -- # local i 00:05:36.450 16:25:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.450 16:25:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.450 16:25:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:36.450 16:25:13 -- common/autotest_common.sh@871 -- # break 00:05:36.450 16:25:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.450 16:25:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.450 16:25:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.450 1+0 records in 00:05:36.450 1+0 records out 00:05:36.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336936 s, 12.2 MB/s 00:05:36.450 16:25:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.450 16:25:13 -- common/autotest_common.sh@884 -- # size=4096 00:05:36.450 16:25:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.450 16:25:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.450 16:25:13 -- common/autotest_common.sh@887 -- # return 0 00:05:36.450 16:25:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.450 16:25:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.450 16:25:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.450 16:25:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.450 16:25:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.718 { 00:05:36.718 "bdev_name": "Malloc0", 00:05:36.718 "nbd_device": "/dev/nbd0" 00:05:36.718 }, 00:05:36.718 { 00:05:36.718 "bdev_name": 
"Malloc1", 00:05:36.718 "nbd_device": "/dev/nbd1" 00:05:36.718 } 00:05:36.718 ]' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.718 { 00:05:36.718 "bdev_name": "Malloc0", 00:05:36.718 "nbd_device": "/dev/nbd0" 00:05:36.718 }, 00:05:36.718 { 00:05:36.718 "bdev_name": "Malloc1", 00:05:36.718 "nbd_device": "/dev/nbd1" 00:05:36.718 } 00:05:36.718 ]' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.718 /dev/nbd1' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.718 /dev/nbd1' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.718 256+0 records in 00:05:36.718 256+0 records out 00:05:36.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00987873 s, 106 MB/s 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.718 256+0 records in 00:05:36.718 256+0 records out 00:05:36.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232052 s, 45.2 MB/s 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.718 256+0 records in 00:05:36.718 256+0 records out 00:05:36.718 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269168 s, 39.0 MB/s 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@51 -- # local i 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.718 16:25:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@41 -- # break 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.977 16:25:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@41 -- # break 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.236 16:25:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.495 16:25:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.495 16:25:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.495 16:25:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.495 16:25:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.495 16:25:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.495 16:25:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.754 16:25:14 -- bdev/nbd_common.sh@65 -- # true 00:05:37.754 16:25:14 -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.754 16:25:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.754 16:25:14 -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.754 16:25:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.754 16:25:14 -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.754 16:25:14 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.013 16:25:15 -- event/event.sh@35 -- # sleep 3 00:05:38.272 [2024-11-16 16:25:15.542957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.272 [2024-11-16 16:25:15.594691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.272 
[2024-11-16 16:25:15.594711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.272 [2024-11-16 16:25:15.666331] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.272 [2024-11-16 16:25:15.666407] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.559 16:25:18 -- event/event.sh@23 -- # for i in {0..2} 00:05:41.559 spdk_app_start Round 1 00:05:41.559 16:25:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:41.559 16:25:18 -- event/event.sh@25 -- # waitforlisten 68905 /var/tmp/spdk-nbd.sock 00:05:41.559 16:25:18 -- common/autotest_common.sh@829 -- # '[' -z 68905 ']' 00:05:41.559 16:25:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.559 16:25:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.559 16:25:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.559 16:25:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.559 16:25:18 -- common/autotest_common.sh@10 -- # set +x 00:05:41.559 16:25:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.559 16:25:18 -- common/autotest_common.sh@862 -- # return 0 00:05:41.559 16:25:18 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.559 Malloc0 00:05:41.559 16:25:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.559 Malloc1 00:05:41.559 16:25:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@12 -- # local i 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.559 16:25:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.818 /dev/nbd0 00:05:41.818 16:25:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.818 16:25:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.818 16:25:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:41.818 16:25:19 -- common/autotest_common.sh@867 -- # local i 00:05:41.818 16:25:19 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:05:41.818 16:25:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.818 16:25:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:41.818 16:25:19 -- common/autotest_common.sh@871 -- # break 00:05:41.818 16:25:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.818 16:25:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.818 16:25:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.818 1+0 records in 00:05:41.818 1+0 records out 00:05:41.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305676 s, 13.4 MB/s 00:05:41.818 16:25:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.818 16:25:19 -- common/autotest_common.sh@884 -- # size=4096 00:05:41.818 16:25:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.818 16:25:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.818 16:25:19 -- common/autotest_common.sh@887 -- # return 0 00:05:41.818 16:25:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.818 16:25:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.818 16:25:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.077 /dev/nbd1 00:05:42.077 16:25:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.077 16:25:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.077 16:25:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:42.077 16:25:19 -- common/autotest_common.sh@867 -- # local i 00:05:42.077 16:25:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:42.077 16:25:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:42.077 16:25:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:42.077 16:25:19 -- common/autotest_common.sh@871 -- # break 00:05:42.077 16:25:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:42.077 16:25:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:42.077 16:25:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.077 1+0 records in 00:05:42.077 1+0 records out 00:05:42.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342997 s, 11.9 MB/s 00:05:42.077 16:25:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.077 16:25:19 -- common/autotest_common.sh@884 -- # size=4096 00:05:42.077 16:25:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.077 16:25:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:42.077 16:25:19 -- common/autotest_common.sh@887 -- # return 0 00:05:42.077 16:25:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.077 16:25:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.077 16:25:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.077 16:25:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.078 16:25:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.645 { 00:05:42.645 "bdev_name": "Malloc0", 00:05:42.645 "nbd_device": "/dev/nbd0" 00:05:42.645 }, 00:05:42.645 { 
00:05:42.645 "bdev_name": "Malloc1", 00:05:42.645 "nbd_device": "/dev/nbd1" 00:05:42.645 } 00:05:42.645 ]' 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.645 { 00:05:42.645 "bdev_name": "Malloc0", 00:05:42.645 "nbd_device": "/dev/nbd0" 00:05:42.645 }, 00:05:42.645 { 00:05:42.645 "bdev_name": "Malloc1", 00:05:42.645 "nbd_device": "/dev/nbd1" 00:05:42.645 } 00:05:42.645 ]' 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.645 /dev/nbd1' 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.645 /dev/nbd1' 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.645 256+0 records in 00:05:42.645 256+0 records out 00:05:42.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00741785 s, 141 MB/s 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.645 256+0 records in 00:05:42.645 256+0 records out 00:05:42.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233246 s, 45.0 MB/s 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.645 256+0 records in 00:05:42.645 256+0 records out 00:05:42.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030265 s, 34.6 MB/s 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.645 16:25:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.646 16:25:19 
-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.646 16:25:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.646 16:25:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.646 16:25:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.646 16:25:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.646 16:25:19 -- bdev/nbd_common.sh@51 -- # local i 00:05:42.646 16:25:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.646 16:25:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.904 16:25:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.904 16:25:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.904 16:25:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.904 16:25:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.904 16:25:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.904 16:25:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.904 16:25:20 -- bdev/nbd_common.sh@41 -- # break 00:05:42.905 16:25:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.905 16:25:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.905 16:25:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@41 -- # break 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.164 16:25:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@65 -- # true 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.423 16:25:20 -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.423 16:25:20 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.682 16:25:21 -- event/event.sh@35 -- # sleep 3 00:05:43.940 [2024-11-16 16:25:21.419034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.200 [2024-11-16 16:25:21.470994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on 
core 1 00:05:44.200 [2024-11-16 16:25:21.471020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.200 [2024-11-16 16:25:21.542536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.200 [2024-11-16 16:25:21.542628] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.735 spdk_app_start Round 2 00:05:46.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.735 16:25:24 -- event/event.sh@23 -- # for i in {0..2} 00:05:46.735 16:25:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:46.735 16:25:24 -- event/event.sh@25 -- # waitforlisten 68905 /var/tmp/spdk-nbd.sock 00:05:46.735 16:25:24 -- common/autotest_common.sh@829 -- # '[' -z 68905 ']' 00:05:46.735 16:25:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.735 16:25:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.735 16:25:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.736 16:25:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.736 16:25:24 -- common/autotest_common.sh@10 -- # set +x 00:05:46.994 16:25:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.994 16:25:24 -- common/autotest_common.sh@862 -- # return 0 00:05:46.994 16:25:24 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.253 Malloc0 00:05:47.253 16:25:24 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.512 Malloc1 00:05:47.512 16:25:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@12 -- # local i 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.512 16:25:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.771 /dev/nbd0 00:05:47.771 16:25:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.771 16:25:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.771 16:25:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:47.771 16:25:25 -- common/autotest_common.sh@867 -- # local i 00:05:47.771 16:25:25 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:47.771 16:25:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:47.771 16:25:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:47.771 16:25:25 -- common/autotest_common.sh@871 -- # break 00:05:47.771 16:25:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:47.771 16:25:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:47.771 16:25:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.771 1+0 records in 00:05:47.771 1+0 records out 00:05:47.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507452 s, 8.1 MB/s 00:05:47.771 16:25:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.771 16:25:25 -- common/autotest_common.sh@884 -- # size=4096 00:05:47.771 16:25:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.771 16:25:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:47.771 16:25:25 -- common/autotest_common.sh@887 -- # return 0 00:05:47.771 16:25:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.771 16:25:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.771 16:25:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.030 /dev/nbd1 00:05:48.030 16:25:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.030 16:25:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.030 16:25:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:48.030 16:25:25 -- common/autotest_common.sh@867 -- # local i 00:05:48.030 16:25:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:48.030 16:25:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:48.030 16:25:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:48.030 16:25:25 -- common/autotest_common.sh@871 -- # break 00:05:48.030 16:25:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:48.030 16:25:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:48.030 16:25:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.030 1+0 records in 00:05:48.030 1+0 records out 00:05:48.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326827 s, 12.5 MB/s 00:05:48.030 16:25:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.030 16:25:25 -- common/autotest_common.sh@884 -- # size=4096 00:05:48.030 16:25:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.030 16:25:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:48.030 16:25:25 -- common/autotest_common.sh@887 -- # return 0 00:05:48.030 16:25:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.030 16:25:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.030 16:25:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.030 16:25:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.030 16:25:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.289 { 00:05:48.289 "bdev_name": "Malloc0", 00:05:48.289 "nbd_device": "/dev/nbd0" 
00:05:48.289 }, 00:05:48.289 { 00:05:48.289 "bdev_name": "Malloc1", 00:05:48.289 "nbd_device": "/dev/nbd1" 00:05:48.289 } 00:05:48.289 ]' 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.289 { 00:05:48.289 "bdev_name": "Malloc0", 00:05:48.289 "nbd_device": "/dev/nbd0" 00:05:48.289 }, 00:05:48.289 { 00:05:48.289 "bdev_name": "Malloc1", 00:05:48.289 "nbd_device": "/dev/nbd1" 00:05:48.289 } 00:05:48.289 ]' 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.289 /dev/nbd1' 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.289 /dev/nbd1' 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.289 256+0 records in 00:05:48.289 256+0 records out 00:05:48.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00928835 s, 113 MB/s 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.289 256+0 records in 00:05:48.289 256+0 records out 00:05:48.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249552 s, 42.0 MB/s 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.289 16:25:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.548 256+0 records in 00:05:48.548 256+0 records out 00:05:48.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026662 s, 39.3 MB/s 00:05:48.548 16:25:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.548 16:25:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.548 16:25:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.548 16:25:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@51 -- # local i 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.549 16:25:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@41 -- # break 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.807 16:25:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@41 -- # break 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.067 16:25:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@65 -- # true 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.325 16:25:26 -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.325 16:25:26 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.584 16:25:26 -- event/event.sh@35 -- # sleep 3 00:05:49.842 [2024-11-16 16:25:27.218197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.842 [2024-11-16 16:25:27.270474] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:49.842 [2024-11-16 16:25:27.270490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.100 [2024-11-16 16:25:27.341831] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.100 [2024-11-16 16:25:27.341921] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.630 16:25:29 -- event/event.sh@38 -- # waitforlisten 68905 /var/tmp/spdk-nbd.sock 00:05:52.630 16:25:29 -- common/autotest_common.sh@829 -- # '[' -z 68905 ']' 00:05:52.630 16:25:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.630 16:25:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.630 16:25:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.630 16:25:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.630 16:25:29 -- common/autotest_common.sh@10 -- # set +x 00:05:52.888 16:25:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.888 16:25:30 -- common/autotest_common.sh@862 -- # return 0 00:05:52.888 16:25:30 -- event/event.sh@39 -- # killprocess 68905 00:05:52.888 16:25:30 -- common/autotest_common.sh@936 -- # '[' -z 68905 ']' 00:05:52.888 16:25:30 -- common/autotest_common.sh@940 -- # kill -0 68905 00:05:52.888 16:25:30 -- common/autotest_common.sh@941 -- # uname 00:05:52.888 16:25:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.888 16:25:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68905 00:05:52.888 killing process with pid 68905 00:05:52.888 16:25:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.888 16:25:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.888 16:25:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68905' 00:05:52.888 16:25:30 -- common/autotest_common.sh@955 -- # kill 68905 00:05:52.888 16:25:30 -- common/autotest_common.sh@960 -- # wait 68905 00:05:53.147 spdk_app_start is called in Round 0. 00:05:53.147 Shutdown signal received, stop current app iteration 00:05:53.147 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:53.147 spdk_app_start is called in Round 1. 00:05:53.147 Shutdown signal received, stop current app iteration 00:05:53.147 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:53.147 spdk_app_start is called in Round 2. 00:05:53.147 Shutdown signal received, stop current app iteration 00:05:53.147 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:53.147 spdk_app_start is called in Round 3. 
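Note on the trace above: the per-round nbd data check (nbd_dd_data_verify in write mode, then verify mode) reduces to a short shell pattern. This is a simplified sketch of that flow, with the paths, block sizes, and device names taken from the trace and error handling trimmed:

    # write phase: seed 1 MiB of random data, then copy it to each nbd device with O_DIRECT
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: byte-compare the first 1 MiB of each device against the seed file
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev" || exit 1
    done
    rm "$tmp"

The dd passes land in the mid-double-digit MB/s range in the runs above largely because oflag=direct bypasses the page cache on the nbd devices.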
00:05:53.147 Shutdown signal received, stop current app iteration 00:05:53.147 16:25:30 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:53.147 16:25:30 -- event/event.sh@42 -- # return 0 00:05:53.147 00:05:53.147 real 0m18.934s 00:05:53.147 user 0m42.444s 00:05:53.147 sys 0m2.936s 00:05:53.147 16:25:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.147 ************************************ 00:05:53.147 END TEST app_repeat 00:05:53.147 ************************************ 00:05:53.147 16:25:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.147 16:25:30 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:53.147 16:25:30 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:53.147 16:25:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.147 16:25:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.147 16:25:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.147 ************************************ 00:05:53.147 START TEST cpu_locks 00:05:53.147 ************************************ 00:05:53.147 16:25:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:53.406 * Looking for test storage... 00:05:53.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.406 16:25:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:53.406 16:25:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:53.406 16:25:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:53.406 16:25:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:53.406 16:25:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:53.406 16:25:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:53.406 16:25:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:53.406 16:25:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:53.406 16:25:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:53.406 16:25:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.406 16:25:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:53.406 16:25:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:53.406 16:25:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:53.406 16:25:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:53.406 16:25:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:53.406 16:25:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:53.406 16:25:30 -- scripts/common.sh@344 -- # : 1 00:05:53.406 16:25:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:53.406 16:25:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.406 16:25:30 -- scripts/common.sh@364 -- # decimal 1 00:05:53.406 16:25:30 -- scripts/common.sh@352 -- # local d=1 00:05:53.406 16:25:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.406 16:25:30 -- scripts/common.sh@354 -- # echo 1 00:05:53.406 16:25:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:53.406 16:25:30 -- scripts/common.sh@365 -- # decimal 2 00:05:53.406 16:25:30 -- scripts/common.sh@352 -- # local d=2 00:05:53.406 16:25:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.406 16:25:30 -- scripts/common.sh@354 -- # echo 2 00:05:53.406 16:25:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:53.406 16:25:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:53.406 16:25:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:53.406 16:25:30 -- scripts/common.sh@367 -- # return 0 00:05:53.406 16:25:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.406 16:25:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.406 --rc genhtml_branch_coverage=1 00:05:53.406 --rc genhtml_function_coverage=1 00:05:53.406 --rc genhtml_legend=1 00:05:53.406 --rc geninfo_all_blocks=1 00:05:53.406 --rc geninfo_unexecuted_blocks=1 00:05:53.406 00:05:53.406 ' 00:05:53.406 16:25:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.406 --rc genhtml_branch_coverage=1 00:05:53.406 --rc genhtml_function_coverage=1 00:05:53.406 --rc genhtml_legend=1 00:05:53.406 --rc geninfo_all_blocks=1 00:05:53.406 --rc geninfo_unexecuted_blocks=1 00:05:53.406 00:05:53.406 ' 00:05:53.406 16:25:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.406 --rc genhtml_branch_coverage=1 00:05:53.406 --rc genhtml_function_coverage=1 00:05:53.406 --rc genhtml_legend=1 00:05:53.406 --rc geninfo_all_blocks=1 00:05:53.406 --rc geninfo_unexecuted_blocks=1 00:05:53.406 00:05:53.406 ' 00:05:53.406 16:25:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.406 --rc genhtml_branch_coverage=1 00:05:53.406 --rc genhtml_function_coverage=1 00:05:53.406 --rc genhtml_legend=1 00:05:53.406 --rc geninfo_all_blocks=1 00:05:53.406 --rc geninfo_unexecuted_blocks=1 00:05:53.406 00:05:53.406 ' 00:05:53.406 16:25:30 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:53.406 16:25:30 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:53.406 16:25:30 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:53.406 16:25:30 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:53.406 16:25:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.406 16:25:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.406 16:25:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.406 ************************************ 00:05:53.406 START TEST default_locks 00:05:53.406 ************************************ 00:05:53.406 16:25:30 -- common/autotest_common.sh@1114 -- # default_locks 00:05:53.406 16:25:30 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69537 00:05:53.406 16:25:30 -- event/cpu_locks.sh@47 -- # waitforlisten 69537 00:05:53.406 16:25:30 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:05:53.406 16:25:30 -- common/autotest_common.sh@829 -- # '[' -z 69537 ']' 00:05:53.406 16:25:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.406 16:25:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.407 16:25:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.407 16:25:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.407 16:25:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.407 [2024-11-16 16:25:30.835611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.407 [2024-11-16 16:25:30.835696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69537 ] 00:05:53.665 [2024-11-16 16:25:30.963262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.665 [2024-11-16 16:25:31.034028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.665 [2024-11-16 16:25:31.034215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.601 16:25:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.601 16:25:31 -- common/autotest_common.sh@862 -- # return 0 00:05:54.601 16:25:31 -- event/cpu_locks.sh@49 -- # locks_exist 69537 00:05:54.601 16:25:31 -- event/cpu_locks.sh@22 -- # lslocks -p 69537 00:05:54.601 16:25:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.859 16:25:32 -- event/cpu_locks.sh@50 -- # killprocess 69537 00:05:54.859 16:25:32 -- common/autotest_common.sh@936 -- # '[' -z 69537 ']' 00:05:54.859 16:25:32 -- common/autotest_common.sh@940 -- # kill -0 69537 00:05:54.859 16:25:32 -- common/autotest_common.sh@941 -- # uname 00:05:54.859 16:25:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.859 16:25:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69537 00:05:54.859 16:25:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.859 killing process with pid 69537 00:05:54.859 16:25:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.859 16:25:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69537' 00:05:54.859 16:25:32 -- common/autotest_common.sh@955 -- # kill 69537 00:05:54.859 16:25:32 -- common/autotest_common.sh@960 -- # wait 69537 00:05:55.427 16:25:32 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69537 00:05:55.427 16:25:32 -- common/autotest_common.sh@650 -- # local es=0 00:05:55.427 16:25:32 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69537 00:05:55.427 16:25:32 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:55.427 16:25:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.427 16:25:32 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:55.427 16:25:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.427 16:25:32 -- common/autotest_common.sh@653 -- # waitforlisten 69537 00:05:55.427 16:25:32 -- common/autotest_common.sh@829 -- # '[' -z 69537 ']' 00:05:55.427 16:25:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.427 16:25:32 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.427 16:25:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.427 16:25:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.427 16:25:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.427 ERROR: process (pid: 69537) is no longer running 00:05:55.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69537) - No such process 00:05:55.427 16:25:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.427 16:25:32 -- common/autotest_common.sh@862 -- # return 1 00:05:55.427 16:25:32 -- common/autotest_common.sh@653 -- # es=1 00:05:55.427 16:25:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.427 16:25:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.427 16:25:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.427 16:25:32 -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.427 16:25:32 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.427 16:25:32 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.427 16:25:32 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.427 00:05:55.427 real 0m1.998s 00:05:55.427 user 0m2.049s 00:05:55.427 sys 0m0.644s 00:05:55.427 16:25:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.427 16:25:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.427 ************************************ 00:05:55.427 END TEST default_locks 00:05:55.427 ************************************ 00:05:55.427 16:25:32 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.427 16:25:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.427 16:25:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.427 16:25:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.427 ************************************ 00:05:55.427 START TEST default_locks_via_rpc 00:05:55.427 ************************************ 00:05:55.427 16:25:32 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:55.427 16:25:32 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69601 00:05:55.427 16:25:32 -- event/cpu_locks.sh@63 -- # waitforlisten 69601 00:05:55.427 16:25:32 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.427 16:25:32 -- common/autotest_common.sh@829 -- # '[' -z 69601 ']' 00:05:55.427 16:25:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.427 16:25:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.427 16:25:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.427 16:25:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.427 16:25:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.427 [2024-11-16 16:25:32.899824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
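The default_locks test that completes above is, stripped of tracing, a check that a lock-enabled target holds per-core file locks and that they disappear with the process. A minimal sketch using the same probes the trace shows (the pid comes from launching spdk_tgt -m 0x1; 69537 in this run):

    pid=69537
    # while the target runs it holds flocks named /var/tmp/spdk_cpu_lock_* on its cores
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    kill "$pid"; wait "$pid"
    # once the process is gone, re-probing the pid must fail, as the ERROR line above shows
    kill -0 "$pid" 2>/dev/null || echo "process (pid: $pid) is no longer running"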
00:05:55.427 [2024-11-16 16:25:32.899928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69601 ] 00:05:55.686 [2024-11-16 16:25:33.037827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.686 [2024-11-16 16:25:33.098961] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.686 [2024-11-16 16:25:33.099150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.622 16:25:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.622 16:25:33 -- common/autotest_common.sh@862 -- # return 0 00:05:56.622 16:25:33 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:56.622 16:25:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.622 16:25:33 -- common/autotest_common.sh@10 -- # set +x 00:05:56.622 16:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.622 16:25:33 -- event/cpu_locks.sh@67 -- # no_locks 00:05:56.622 16:25:33 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.622 16:25:33 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.622 16:25:33 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.622 16:25:33 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.622 16:25:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.622 16:25:33 -- common/autotest_common.sh@10 -- # set +x 00:05:56.622 16:25:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.622 16:25:33 -- event/cpu_locks.sh@71 -- # locks_exist 69601 00:05:56.622 16:25:33 -- event/cpu_locks.sh@22 -- # lslocks -p 69601 00:05:56.622 16:25:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.881 16:25:34 -- event/cpu_locks.sh@73 -- # killprocess 69601 00:05:56.881 16:25:34 -- common/autotest_common.sh@936 -- # '[' -z 69601 ']' 00:05:56.881 16:25:34 -- common/autotest_common.sh@940 -- # kill -0 69601 00:05:56.881 16:25:34 -- common/autotest_common.sh@941 -- # uname 00:05:56.881 16:25:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.881 16:25:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69601 00:05:57.140 16:25:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.140 killing process with pid 69601 00:05:57.140 16:25:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.140 16:25:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69601' 00:05:57.140 16:25:34 -- common/autotest_common.sh@955 -- # kill 69601 00:05:57.140 16:25:34 -- common/autotest_common.sh@960 -- # wait 69601 00:05:57.399 00:05:57.399 real 0m2.031s 00:05:57.399 user 0m2.122s 00:05:57.399 sys 0m0.660s 00:05:57.399 ************************************ 00:05:57.399 END TEST default_locks_via_rpc 00:05:57.399 ************************************ 00:05:57.399 16:25:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.399 16:25:34 -- common/autotest_common.sh@10 -- # set +x 00:05:57.658 16:25:34 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:57.658 16:25:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.658 16:25:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.658 16:25:34 -- common/autotest_common.sh@10 -- # set +x 00:05:57.658 
************************************ 00:05:57.658 START TEST non_locking_app_on_locked_coremask 00:05:57.658 ************************************ 00:05:57.658 16:25:34 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:57.658 16:25:34 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69670 00:05:57.658 16:25:34 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.658 16:25:34 -- event/cpu_locks.sh@81 -- # waitforlisten 69670 /var/tmp/spdk.sock 00:05:57.658 16:25:34 -- common/autotest_common.sh@829 -- # '[' -z 69670 ']' 00:05:57.658 16:25:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.658 16:25:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.658 16:25:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.658 16:25:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.658 16:25:34 -- common/autotest_common.sh@10 -- # set +x 00:05:57.658 [2024-11-16 16:25:34.982767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:57.658 [2024-11-16 16:25:34.982863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69670 ] 00:05:57.658 [2024-11-16 16:25:35.117916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.917 [2024-11-16 16:25:35.176011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.917 [2024-11-16 16:25:35.176204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.484 16:25:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.484 16:25:35 -- common/autotest_common.sh@862 -- # return 0 00:05:58.484 16:25:35 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69698 00:05:58.484 16:25:35 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:58.484 16:25:35 -- event/cpu_locks.sh@85 -- # waitforlisten 69698 /var/tmp/spdk2.sock 00:05:58.484 16:25:35 -- common/autotest_common.sh@829 -- # '[' -z 69698 ']' 00:05:58.484 16:25:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.484 16:25:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.484 16:25:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.484 16:25:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.484 16:25:35 -- common/autotest_common.sh@10 -- # set +x 00:05:58.743 [2024-11-16 16:25:36.003489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.743 [2024-11-16 16:25:36.003566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69698 ] 00:05:58.743 [2024-11-16 16:25:36.140156] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
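The non_locking_app_on_locked_coremask case starting here exercises the opt-out path: a second target may share an already-claimed core only if it skips lock acquisition. A condensed sketch of the two launches, with the binary path, masks, and RPC sockets as logged (pids 69670 and 69698 in this run):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                                  # claims the core-0 lock
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # opts out, starts cleanly

The second instance logs "CPU core locks deactivated." rather than failing on the contested core.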
00:05:58.743 [2024-11-16 16:25:36.140193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.002 [2024-11-16 16:25:36.281834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.002 [2024-11-16 16:25:36.281994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.569 16:25:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.569 16:25:36 -- common/autotest_common.sh@862 -- # return 0 00:05:59.569 16:25:36 -- event/cpu_locks.sh@87 -- # locks_exist 69670 00:05:59.569 16:25:36 -- event/cpu_locks.sh@22 -- # lslocks -p 69670 00:05:59.569 16:25:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.137 16:25:37 -- event/cpu_locks.sh@89 -- # killprocess 69670 00:06:00.137 16:25:37 -- common/autotest_common.sh@936 -- # '[' -z 69670 ']' 00:06:00.137 16:25:37 -- common/autotest_common.sh@940 -- # kill -0 69670 00:06:00.137 16:25:37 -- common/autotest_common.sh@941 -- # uname 00:06:00.137 16:25:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.137 16:25:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69670 00:06:00.137 16:25:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.137 16:25:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.137 killing process with pid 69670 00:06:00.137 16:25:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69670' 00:06:00.137 16:25:37 -- common/autotest_common.sh@955 -- # kill 69670 00:06:00.137 16:25:37 -- common/autotest_common.sh@960 -- # wait 69670 00:06:01.075 16:25:38 -- event/cpu_locks.sh@90 -- # killprocess 69698 00:06:01.075 16:25:38 -- common/autotest_common.sh@936 -- # '[' -z 69698 ']' 00:06:01.075 16:25:38 -- common/autotest_common.sh@940 -- # kill -0 69698 00:06:01.075 16:25:38 -- common/autotest_common.sh@941 -- # uname 00:06:01.075 16:25:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.075 16:25:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69698 00:06:01.075 killing process with pid 69698 00:06:01.075 16:25:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.075 16:25:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.075 16:25:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69698' 00:06:01.075 16:25:38 -- common/autotest_common.sh@955 -- # kill 69698 00:06:01.075 16:25:38 -- common/autotest_common.sh@960 -- # wait 69698 00:06:01.643 ************************************ 00:06:01.643 END TEST non_locking_app_on_locked_coremask 00:06:01.643 ************************************ 00:06:01.643 00:06:01.643 real 0m4.024s 00:06:01.643 user 0m4.255s 00:06:01.643 sys 0m1.111s 00:06:01.643 16:25:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.643 16:25:38 -- common/autotest_common.sh@10 -- # set +x 00:06:01.643 16:25:38 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.643 16:25:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.643 16:25:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.643 16:25:38 -- common/autotest_common.sh@10 -- # set +x 00:06:01.643 ************************************ 00:06:01.643 START TEST locking_app_on_unlocked_coremask 00:06:01.643 ************************************ 00:06:01.643 16:25:38 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:01.643 16:25:38 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69777 00:06:01.643 16:25:39 -- event/cpu_locks.sh@99 -- # waitforlisten 69777 /var/tmp/spdk.sock 00:06:01.643 16:25:39 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:01.643 16:25:39 -- common/autotest_common.sh@829 -- # '[' -z 69777 ']' 00:06:01.643 16:25:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.643 16:25:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.643 16:25:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.643 16:25:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.643 16:25:39 -- common/autotest_common.sh@10 -- # set +x 00:06:01.643 [2024-11-16 16:25:39.056267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.643 [2024-11-16 16:25:39.056521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69777 ] 00:06:01.901 [2024-11-16 16:25:39.188876] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:01.901 [2024-11-16 16:25:39.188915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.901 [2024-11-16 16:25:39.256211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.901 [2024-11-16 16:25:39.256745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.836 16:25:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.836 16:25:40 -- common/autotest_common.sh@862 -- # return 0 00:06:02.836 16:25:40 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69805 00:06:02.836 16:25:40 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.836 16:25:40 -- event/cpu_locks.sh@103 -- # waitforlisten 69805 /var/tmp/spdk2.sock 00:06:02.836 16:25:40 -- common/autotest_common.sh@829 -- # '[' -z 69805 ']' 00:06:02.836 16:25:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.836 16:25:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.836 16:25:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.836 16:25:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.836 16:25:40 -- common/autotest_common.sh@10 -- # set +x 00:06:02.836 [2024-11-16 16:25:40.109881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:02.836 [2024-11-16 16:25:40.110147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69805 ] 00:06:02.836 [2024-11-16 16:25:40.251635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.094 [2024-11-16 16:25:40.386514] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.094 [2024-11-16 16:25:40.386677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.662 16:25:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.662 16:25:41 -- common/autotest_common.sh@862 -- # return 0 00:06:03.662 16:25:41 -- event/cpu_locks.sh@105 -- # locks_exist 69805 00:06:03.662 16:25:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.663 16:25:41 -- event/cpu_locks.sh@22 -- # lslocks -p 69805 00:06:04.599 16:25:41 -- event/cpu_locks.sh@107 -- # killprocess 69777 00:06:04.599 16:25:41 -- common/autotest_common.sh@936 -- # '[' -z 69777 ']' 00:06:04.599 16:25:41 -- common/autotest_common.sh@940 -- # kill -0 69777 00:06:04.599 16:25:41 -- common/autotest_common.sh@941 -- # uname 00:06:04.599 16:25:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.599 16:25:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69777 00:06:04.599 16:25:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.599 16:25:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.599 killing process with pid 69777 00:06:04.599 16:25:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69777' 00:06:04.599 16:25:41 -- common/autotest_common.sh@955 -- # kill 69777 00:06:04.599 16:25:41 -- common/autotest_common.sh@960 -- # wait 69777 00:06:05.601 16:25:42 -- event/cpu_locks.sh@108 -- # killprocess 69805 00:06:05.601 16:25:42 -- common/autotest_common.sh@936 -- # '[' -z 69805 ']' 00:06:05.601 16:25:42 -- common/autotest_common.sh@940 -- # kill -0 69805 00:06:05.601 16:25:42 -- common/autotest_common.sh@941 -- # uname 00:06:05.601 16:25:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.601 16:25:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69805 00:06:05.601 killing process with pid 69805 00:06:05.601 16:25:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.601 16:25:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.601 16:25:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69805' 00:06:05.601 16:25:42 -- common/autotest_common.sh@955 -- # kill 69805 00:06:05.601 16:25:42 -- common/autotest_common.sh@960 -- # wait 69805 00:06:06.168 ************************************ 00:06:06.168 END TEST locking_app_on_unlocked_coremask 00:06:06.168 ************************************ 00:06:06.168 00:06:06.168 real 0m4.367s 00:06:06.168 user 0m4.702s 00:06:06.168 sys 0m1.247s 00:06:06.168 16:25:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.168 16:25:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.168 16:25:43 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:06.168 16:25:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.168 16:25:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.168 16:25:43 -- common/autotest_common.sh@10 -- # set +x 
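locking_app_on_unlocked_coremask, traced above, flips the roles: the first instance opts out with --disable-cpumask-locks, so the second, lock-enabled instance on the same core ends up owning the flocks; the locks_exist 69805 probe is the same lslocks check as before. A sketch of the reversal, under the same assumptions as the previous snippet:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &     # pid 69777: holds no core locks
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &      # pid 69805: claims the core-0 lock
    lslocks -p 69805 | grep -c spdk_cpu_lock         # non-zero only for the second pid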
00:06:06.168 ************************************ 00:06:06.168 START TEST locking_app_on_locked_coremask 00:06:06.168 ************************************ 00:06:06.168 16:25:43 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:06.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.168 16:25:43 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69890 00:06:06.168 16:25:43 -- event/cpu_locks.sh@116 -- # waitforlisten 69890 /var/tmp/spdk.sock 00:06:06.168 16:25:43 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.168 16:25:43 -- common/autotest_common.sh@829 -- # '[' -z 69890 ']' 00:06:06.168 16:25:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.168 16:25:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.168 16:25:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.168 16:25:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.168 16:25:43 -- common/autotest_common.sh@10 -- # set +x 00:06:06.168 [2024-11-16 16:25:43.486158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.168 [2024-11-16 16:25:43.486400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69890 ] 00:06:06.168 [2024-11-16 16:25:43.618825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.429 [2024-11-16 16:25:43.682446] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.429 [2024-11-16 16:25:43.682965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.997 16:25:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.997 16:25:44 -- common/autotest_common.sh@862 -- # return 0 00:06:06.997 16:25:44 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.997 16:25:44 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69918 00:06:06.997 16:25:44 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69918 /var/tmp/spdk2.sock 00:06:06.997 16:25:44 -- common/autotest_common.sh@650 -- # local es=0 00:06:06.997 16:25:44 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69918 /var/tmp/spdk2.sock 00:06:06.997 16:25:44 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.997 16:25:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.997 16:25:44 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.997 16:25:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.997 16:25:44 -- common/autotest_common.sh@653 -- # waitforlisten 69918 /var/tmp/spdk2.sock 00:06:06.997 16:25:44 -- common/autotest_common.sh@829 -- # '[' -z 69918 ']' 00:06:06.997 16:25:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.997 16:25:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.997 16:25:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:06.997 16:25:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.997 16:25:44 -- common/autotest_common.sh@10 -- # set +x 00:06:06.997 [2024-11-16 16:25:44.464988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.997 [2024-11-16 16:25:44.465257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69918 ] 00:06:07.256 [2024-11-16 16:25:44.599234] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69890 has claimed it. 00:06:07.256 [2024-11-16 16:25:44.599302] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.823 ERROR: process (pid: 69918) is no longer running 00:06:07.823 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69918) - No such process 00:06:07.823 16:25:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.823 16:25:45 -- common/autotest_common.sh@862 -- # return 1 00:06:07.823 16:25:45 -- common/autotest_common.sh@653 -- # es=1 00:06:07.823 16:25:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.823 16:25:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.823 16:25:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.823 16:25:45 -- event/cpu_locks.sh@122 -- # locks_exist 69890 00:06:07.823 16:25:45 -- event/cpu_locks.sh@22 -- # lslocks -p 69890 00:06:07.824 16:25:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.390 16:25:45 -- event/cpu_locks.sh@124 -- # killprocess 69890 00:06:08.390 16:25:45 -- common/autotest_common.sh@936 -- # '[' -z 69890 ']' 00:06:08.390 16:25:45 -- common/autotest_common.sh@940 -- # kill -0 69890 00:06:08.390 16:25:45 -- common/autotest_common.sh@941 -- # uname 00:06:08.390 16:25:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.391 16:25:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69890 00:06:08.391 killing process with pid 69890 00:06:08.391 16:25:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.391 16:25:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.391 16:25:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69890' 00:06:08.391 16:25:45 -- common/autotest_common.sh@955 -- # kill 69890 00:06:08.391 16:25:45 -- common/autotest_common.sh@960 -- # wait 69890 00:06:08.958 00:06:08.958 real 0m2.716s 00:06:08.958 user 0m2.975s 00:06:08.958 sys 0m0.718s 00:06:08.958 16:25:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.958 ************************************ 00:06:08.958 END TEST locking_app_on_locked_coremask 00:06:08.958 ************************************ 00:06:08.958 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 16:25:46 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.958 16:25:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.958 16:25:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.958 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 ************************************ 00:06:08.958 START TEST locking_overlapped_coremask 00:06:08.958 ************************************ 00:06:08.958 16:25:46 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:08.958 16:25:46 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69969 00:06:08.958 16:25:46 -- event/cpu_locks.sh@133 -- # waitforlisten 69969 /var/tmp/spdk.sock 00:06:08.958 16:25:46 -- common/autotest_common.sh@829 -- # '[' -z 69969 ']' 00:06:08.958 16:25:46 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.958 16:25:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.958 16:25:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.958 16:25:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.958 16:25:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.958 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:06:08.958 [2024-11-16 16:25:46.258379] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.958 [2024-11-16 16:25:46.258652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69969 ] 00:06:08.958 [2024-11-16 16:25:46.395193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.217 [2024-11-16 16:25:46.459009] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.217 [2024-11-16 16:25:46.459708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.217 [2024-11-16 16:25:46.459973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.217 [2024-11-16 16:25:46.459949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.785 16:25:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.785 16:25:47 -- common/autotest_common.sh@862 -- # return 0 00:06:09.785 16:25:47 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70005 00:06:09.785 16:25:47 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:09.785 16:25:47 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70005 /var/tmp/spdk2.sock 00:06:09.785 16:25:47 -- common/autotest_common.sh@650 -- # local es=0 00:06:09.785 16:25:47 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70005 /var/tmp/spdk2.sock 00:06:09.785 16:25:47 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.785 16:25:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.785 16:25:47 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.785 16:25:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.785 16:25:47 -- common/autotest_common.sh@653 -- # waitforlisten 70005 /var/tmp/spdk2.sock 00:06:09.785 16:25:47 -- common/autotest_common.sh@829 -- # '[' -z 70005 ']' 00:06:09.785 16:25:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.785 16:25:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.785 16:25:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
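Each negative check in these traces (NOT waitforlisten ..., then es=1) rests on one idiom: run a command that is expected to fail and invert its exit status. A hedged reconstruction of that idiom follows; the real NOT/valid_exec_arg pair in autotest_common.sh carries extra argument validation and es bookkeeping beyond this:

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    # e.g. a second target must never come up on a core another pid has claimed:
    NOT waitforlisten 69918 /var/tmp/spdk2.sock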
00:06:09.785 16:25:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.785 16:25:47 -- common/autotest_common.sh@10 -- # set +x 00:06:10.045 [2024-11-16 16:25:47.286522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.045 [2024-11-16 16:25:47.286863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70005 ] 00:06:10.045 [2024-11-16 16:25:47.420558] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69969 has claimed it. 00:06:10.045 [2024-11-16 16:25:47.424138] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.613 ERROR: process (pid: 70005) is no longer running 00:06:10.613 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (70005) - No such process 00:06:10.613 16:25:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.613 16:25:48 -- common/autotest_common.sh@862 -- # return 1 00:06:10.613 16:25:48 -- common/autotest_common.sh@653 -- # es=1 00:06:10.613 16:25:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.613 16:25:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.613 16:25:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.613 16:25:48 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.613 16:25:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.613 16:25:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.613 16:25:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.613 16:25:48 -- event/cpu_locks.sh@141 -- # killprocess 69969 00:06:10.613 16:25:48 -- common/autotest_common.sh@936 -- # '[' -z 69969 ']' 00:06:10.613 16:25:48 -- common/autotest_common.sh@940 -- # kill -0 69969 00:06:10.613 16:25:48 -- common/autotest_common.sh@941 -- # uname 00:06:10.613 16:25:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:10.613 16:25:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69969 00:06:10.613 16:25:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:10.613 16:25:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:10.613 16:25:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69969' 00:06:10.613 killing process with pid 69969 00:06:10.613 16:25:48 -- common/autotest_common.sh@955 -- # kill 69969 00:06:10.613 16:25:48 -- common/autotest_common.sh@960 -- # wait 69969 00:06:11.181 00:06:11.181 real 0m2.403s 00:06:11.181 user 0m6.684s 00:06:11.181 sys 0m0.522s 00:06:11.181 16:25:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.181 16:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:11.181 ************************************ 00:06:11.181 END TEST locking_overlapped_coremask 00:06:11.181 ************************************ 00:06:11.181 16:25:48 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:11.181 16:25:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.181 16:25:48 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.181 16:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:11.181 ************************************ 00:06:11.181 START TEST locking_overlapped_coremask_via_rpc 00:06:11.181 ************************************ 00:06:11.181 16:25:48 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:11.181 16:25:48 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70051 00:06:11.181 16:25:48 -- event/cpu_locks.sh@149 -- # waitforlisten 70051 /var/tmp/spdk.sock 00:06:11.181 16:25:48 -- common/autotest_common.sh@829 -- # '[' -z 70051 ']' 00:06:11.181 16:25:48 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:11.181 16:25:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.181 16:25:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.181 16:25:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.181 16:25:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.181 16:25:48 -- common/autotest_common.sh@10 -- # set +x 00:06:11.440 [2024-11-16 16:25:48.723299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.440 [2024-11-16 16:25:48.723583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70051 ] 00:06:11.440 [2024-11-16 16:25:48.865877] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.440 [2024-11-16 16:25:48.866270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.699 [2024-11-16 16:25:48.930967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.699 [2024-11-16 16:25:48.931667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.699 [2024-11-16 16:25:48.931748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.699 [2024-11-16 16:25:48.931761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.265 16:25:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.265 16:25:49 -- common/autotest_common.sh@862 -- # return 0 00:06:12.265 16:25:49 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70081 00:06:12.265 16:25:49 -- event/cpu_locks.sh@153 -- # waitforlisten 70081 /var/tmp/spdk2.sock 00:06:12.265 16:25:49 -- common/autotest_common.sh@829 -- # '[' -z 70081 ']' 00:06:12.265 16:25:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.265 16:25:49 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:12.265 16:25:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.265 16:25:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
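[editor's aside] The via_rpc variant differs from the previous test in one flag: both targets are started with --disable-cpumask-locks (hence the "CPU core locks deactivated." notices above), so the overlapping 0x7/0x1c masks can coexist until locking is switched on over JSON-RPC. A sketch of the same launch pair, using the exact command lines from the log:

  # Both targets start despite sharing core 2, because no locks are taken yet.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &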
00:06:12.265 16:25:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.265 16:25:49 -- common/autotest_common.sh@10 -- # set +x 00:06:12.265 [2024-11-16 16:25:49.698883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.265 [2024-11-16 16:25:49.699170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70081 ] 00:06:12.525 [2024-11-16 16:25:49.838863] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.525 [2024-11-16 16:25:49.843080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.525 [2024-11-16 16:25:49.974689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.525 [2024-11-16 16:25:49.975345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.525 [2024-11-16 16:25:49.975442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.525 [2024-11-16 16:25:49.975444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.462 16:25:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.462 16:25:50 -- common/autotest_common.sh@862 -- # return 0 00:06:13.462 16:25:50 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.462 16:25:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.462 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.462 16:25:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.462 16:25:50 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.462 16:25:50 -- common/autotest_common.sh@650 -- # local es=0 00:06:13.462 16:25:50 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.462 16:25:50 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:13.462 16:25:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.462 16:25:50 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:13.462 16:25:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.462 16:25:50 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.462 16:25:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.462 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.462 [2024-11-16 16:25:50.705309] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70051 has claimed it. 00:06:13.463 2024/11/16 16:25:50 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:13.463 request: 00:06:13.463 { 00:06:13.463 "method": "framework_enable_cpumask_locks", 00:06:13.463 "params": {} 00:06:13.463 } 00:06:13.463 Got JSON-RPC error response 00:06:13.463 GoRPCClient: error on JSON-RPC call 00:06:13.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
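[editor's aside] The rejection above is the point of the test: framework_enable_cpumask_locks makes an already-running target claim its cores, and the second instance cannot claim core 2 once the first holds it, hence the -32603 "Failed to claim CPU core: 2" response. Outside the suite's rpc_cmd wrapper, the same pair of calls would look like this (the scripts/rpc.py path is our assumption):

  # First target (default socket /var/tmp/spdk.sock): claims cores 0-2.
  scripts/rpc.py framework_enable_cpumask_locks
  # Overlapping target on the second socket: fails with -32603.
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks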
00:06:13.463 16:25:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:13.463 16:25:50 -- common/autotest_common.sh@653 -- # es=1 00:06:13.463 16:25:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.463 16:25:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.463 16:25:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.463 16:25:50 -- event/cpu_locks.sh@158 -- # waitforlisten 70051 /var/tmp/spdk.sock 00:06:13.463 16:25:50 -- common/autotest_common.sh@829 -- # '[' -z 70051 ']' 00:06:13.463 16:25:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.463 16:25:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.463 16:25:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.463 16:25:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.463 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.463 16:25:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.463 16:25:50 -- common/autotest_common.sh@862 -- # return 0 00:06:13.463 16:25:50 -- event/cpu_locks.sh@159 -- # waitforlisten 70081 /var/tmp/spdk2.sock 00:06:13.463 16:25:50 -- common/autotest_common.sh@829 -- # '[' -z 70081 ']' 00:06:13.463 16:25:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.463 16:25:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.463 16:25:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.463 16:25:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.463 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:06:13.722 16:25:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.722 16:25:51 -- common/autotest_common.sh@862 -- # return 0 00:06:13.722 ************************************ 00:06:13.722 END TEST locking_overlapped_coremask_via_rpc 00:06:13.722 ************************************ 00:06:13.722 16:25:51 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.722 16:25:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.722 16:25:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.722 16:25:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.722 00:06:13.722 real 0m2.542s 00:06:13.722 user 0m1.287s 00:06:13.722 sys 0m0.189s 00:06:13.722 16:25:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.722 16:25:51 -- common/autotest_common.sh@10 -- # set +x 00:06:13.980 16:25:51 -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.980 16:25:51 -- event/cpu_locks.sh@15 -- # [[ -z 70051 ]] 00:06:13.980 16:25:51 -- event/cpu_locks.sh@15 -- # killprocess 70051 00:06:13.980 16:25:51 -- common/autotest_common.sh@936 -- # '[' -z 70051 ']' 00:06:13.980 16:25:51 -- common/autotest_common.sh@940 -- # kill -0 70051 00:06:13.980 16:25:51 -- common/autotest_common.sh@941 -- # uname 00:06:13.980 16:25:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.980 16:25:51 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 70051 00:06:13.981 killing process with pid 70051 00:06:13.981 16:25:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.981 16:25:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.981 16:25:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70051' 00:06:13.981 16:25:51 -- common/autotest_common.sh@955 -- # kill 70051 00:06:13.981 16:25:51 -- common/autotest_common.sh@960 -- # wait 70051 00:06:14.547 16:25:51 -- event/cpu_locks.sh@16 -- # [[ -z 70081 ]] 00:06:14.547 16:25:51 -- event/cpu_locks.sh@16 -- # killprocess 70081 00:06:14.547 16:25:51 -- common/autotest_common.sh@936 -- # '[' -z 70081 ']' 00:06:14.547 16:25:51 -- common/autotest_common.sh@940 -- # kill -0 70081 00:06:14.547 16:25:51 -- common/autotest_common.sh@941 -- # uname 00:06:14.547 16:25:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.547 16:25:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70081 00:06:14.547 killing process with pid 70081 00:06:14.547 16:25:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:14.547 16:25:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:14.547 16:25:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70081' 00:06:14.547 16:25:51 -- common/autotest_common.sh@955 -- # kill 70081 00:06:14.547 16:25:51 -- common/autotest_common.sh@960 -- # wait 70081 00:06:14.804 16:25:52 -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.804 16:25:52 -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.804 16:25:52 -- event/cpu_locks.sh@15 -- # [[ -z 70051 ]] 00:06:14.804 16:25:52 -- event/cpu_locks.sh@15 -- # killprocess 70051 00:06:14.804 16:25:52 -- common/autotest_common.sh@936 -- # '[' -z 70051 ']' 00:06:14.804 16:25:52 -- common/autotest_common.sh@940 -- # kill -0 70051 00:06:14.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70051) - No such process 00:06:14.804 Process with pid 70051 is not found 00:06:14.804 16:25:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70051 is not found' 00:06:14.804 16:25:52 -- event/cpu_locks.sh@16 -- # [[ -z 70081 ]] 00:06:14.804 Process with pid 70081 is not found 00:06:14.804 16:25:52 -- event/cpu_locks.sh@16 -- # killprocess 70081 00:06:14.804 16:25:52 -- common/autotest_common.sh@936 -- # '[' -z 70081 ']' 00:06:14.804 16:25:52 -- common/autotest_common.sh@940 -- # kill -0 70081 00:06:14.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70081) - No such process 00:06:14.804 16:25:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70081 is not found' 00:06:14.804 16:25:52 -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.804 00:06:14.804 real 0m21.564s 00:06:14.804 user 0m36.847s 00:06:14.804 sys 0m6.058s 00:06:14.804 16:25:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.804 ************************************ 00:06:14.804 END TEST cpu_locks 00:06:14.804 ************************************ 00:06:14.804 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:14.804 ************************************ 00:06:14.804 END TEST event 00:06:14.804 ************************************ 00:06:14.804 00:06:14.804 real 0m49.528s 00:06:14.804 user 1m34.690s 00:06:14.804 sys 0m9.891s 00:06:14.804 16:25:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.804 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:14.804 16:25:52 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.804 16:25:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:14.805 16:25:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.805 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:14.805 ************************************ 00:06:14.805 START TEST thread 00:06:14.805 ************************************ 00:06:14.805 16:25:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:15.062 * Looking for test storage... 00:06:15.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:15.062 16:25:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:15.062 16:25:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:15.062 16:25:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:15.062 16:25:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:15.062 16:25:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:15.062 16:25:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:15.062 16:25:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:15.062 16:25:52 -- scripts/common.sh@335 -- # IFS=.-: 00:06:15.062 16:25:52 -- scripts/common.sh@335 -- # read -ra ver1 00:06:15.062 16:25:52 -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.062 16:25:52 -- scripts/common.sh@336 -- # read -ra ver2 00:06:15.062 16:25:52 -- scripts/common.sh@337 -- # local 'op=<' 00:06:15.062 16:25:52 -- scripts/common.sh@339 -- # ver1_l=2 00:06:15.062 16:25:52 -- scripts/common.sh@340 -- # ver2_l=1 00:06:15.062 16:25:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:15.062 16:25:52 -- scripts/common.sh@343 -- # case "$op" in 00:06:15.062 16:25:52 -- scripts/common.sh@344 -- # : 1 00:06:15.062 16:25:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:15.062 16:25:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.062 16:25:52 -- scripts/common.sh@364 -- # decimal 1 00:06:15.062 16:25:52 -- scripts/common.sh@352 -- # local d=1 00:06:15.062 16:25:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.062 16:25:52 -- scripts/common.sh@354 -- # echo 1 00:06:15.062 16:25:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:15.062 16:25:52 -- scripts/common.sh@365 -- # decimal 2 00:06:15.062 16:25:52 -- scripts/common.sh@352 -- # local d=2 00:06:15.062 16:25:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.062 16:25:52 -- scripts/common.sh@354 -- # echo 2 00:06:15.062 16:25:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:15.062 16:25:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:15.062 16:25:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:15.062 16:25:52 -- scripts/common.sh@367 -- # return 0 00:06:15.062 16:25:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.062 16:25:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:15.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.062 --rc genhtml_branch_coverage=1 00:06:15.062 --rc genhtml_function_coverage=1 00:06:15.062 --rc genhtml_legend=1 00:06:15.062 --rc geninfo_all_blocks=1 00:06:15.062 --rc geninfo_unexecuted_blocks=1 00:06:15.062 00:06:15.062 ' 00:06:15.062 16:25:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:15.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.062 --rc genhtml_branch_coverage=1 00:06:15.062 --rc genhtml_function_coverage=1 00:06:15.062 --rc genhtml_legend=1 00:06:15.062 --rc geninfo_all_blocks=1 00:06:15.062 --rc geninfo_unexecuted_blocks=1 00:06:15.062 00:06:15.062 ' 00:06:15.062 16:25:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:15.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.062 --rc genhtml_branch_coverage=1 00:06:15.062 --rc genhtml_function_coverage=1 00:06:15.062 --rc genhtml_legend=1 00:06:15.062 --rc geninfo_all_blocks=1 00:06:15.062 --rc geninfo_unexecuted_blocks=1 00:06:15.062 00:06:15.062 ' 00:06:15.062 16:25:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:15.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.062 --rc genhtml_branch_coverage=1 00:06:15.062 --rc genhtml_function_coverage=1 00:06:15.062 --rc genhtml_legend=1 00:06:15.062 --rc geninfo_all_blocks=1 00:06:15.062 --rc geninfo_unexecuted_blocks=1 00:06:15.062 00:06:15.062 ' 00:06:15.062 16:25:52 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.062 16:25:52 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:15.062 16:25:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.062 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.062 ************************************ 00:06:15.062 START TEST thread_poller_perf 00:06:15.062 ************************************ 00:06:15.062 16:25:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.062 [2024-11-16 16:25:52.435286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:15.062 [2024-11-16 16:25:52.435510] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70240 ] 00:06:15.319 [2024-11-16 16:25:52.570367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.319 [2024-11-16 16:25:52.641402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.319 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:16.695 [2024-11-16T16:25:54.186Z] ====================================== 00:06:16.695 [2024-11-16T16:25:54.186Z] busy:2209909852 (cyc) 00:06:16.695 [2024-11-16T16:25:54.186Z] total_run_count: 389000 00:06:16.695 [2024-11-16T16:25:54.186Z] tsc_hz: 2200000000 (cyc) 00:06:16.695 [2024-11-16T16:25:54.186Z] ====================================== 00:06:16.695 [2024-11-16T16:25:54.186Z] poller_cost: 5681 (cyc), 2582 (nsec) 00:06:16.695 ************************************ 00:06:16.695 END TEST thread_poller_perf 00:06:16.695 ************************************ 00:06:16.695 00:06:16.695 real 0m1.331s 00:06:16.695 user 0m1.159s 00:06:16.695 sys 0m0.062s 00:06:16.695 16:25:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.695 16:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:16.695 16:25:53 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.695 16:25:53 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:16.695 16:25:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.695 16:25:53 -- common/autotest_common.sh@10 -- # set +x 00:06:16.695 ************************************ 00:06:16.695 START TEST thread_poller_perf 00:06:16.695 ************************************ 00:06:16.695 16:25:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.695 [2024-11-16 16:25:53.824487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.695 [2024-11-16 16:25:53.824765] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70270 ] 00:06:16.695 [2024-11-16 16:25:53.961029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.695 [2024-11-16 16:25:54.031069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.695 Running 1000 pollers for 1 seconds with 0 microseconds period. 
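[editor's aside] For readers checking the numbers: poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. Reproducing the first run's figures (awk used purely for the arithmetic):

  awk 'BEGIN {
      busy = 2209909852; runs = 389000; tsc_hz = 2200000000
      cyc  = busy / runs               # ~5681 cycles per poll
      nsec = cyc / (tsc_hz / 1e9)      # ~2582 ns at 2.2 GHz
      printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
  }'

The second run below drops the period to 0 microseconds (-l 0), so each poller fires on every reactor iteration and the per-poll cost falls to 409 cycles.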
00:06:17.631 [2024-11-16T16:25:55.122Z] ====================================== 00:06:17.631 [2024-11-16T16:25:55.122Z] busy:2203050406 (cyc) 00:06:17.631 [2024-11-16T16:25:55.122Z] total_run_count: 5382000 00:06:17.631 [2024-11-16T16:25:55.122Z] tsc_hz: 2200000000 (cyc) 00:06:17.631 [2024-11-16T16:25:55.122Z] ====================================== 00:06:17.631 [2024-11-16T16:25:55.122Z] poller_cost: 409 (cyc), 185 (nsec) 00:06:17.631 00:06:17.631 real 0m1.292s 00:06:17.631 user 0m1.119s 00:06:17.631 sys 0m0.064s 00:06:17.631 16:25:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.631 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:06:17.631 ************************************ 00:06:17.631 END TEST thread_poller_perf 00:06:17.631 ************************************ 00:06:17.890 16:25:55 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:17.890 ************************************ 00:06:17.890 END TEST thread 00:06:17.890 ************************************ 00:06:17.890 00:06:17.890 real 0m2.893s 00:06:17.890 user 0m2.402s 00:06:17.890 sys 0m0.270s 00:06:17.890 16:25:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.890 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:06:17.890 16:25:55 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:17.890 16:25:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.890 16:25:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.890 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:06:17.890 ************************************ 00:06:17.890 START TEST accel 00:06:17.890 ************************************ 00:06:17.890 16:25:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:17.890 * Looking for test storage... 00:06:17.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:17.890 16:25:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:17.890 16:25:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:17.890 16:25:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:17.890 16:25:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:17.890 16:25:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:17.890 16:25:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:17.890 16:25:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:17.890 16:25:55 -- scripts/common.sh@335 -- # IFS=.-: 00:06:17.890 16:25:55 -- scripts/common.sh@335 -- # read -ra ver1 00:06:17.890 16:25:55 -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.890 16:25:55 -- scripts/common.sh@336 -- # read -ra ver2 00:06:17.890 16:25:55 -- scripts/common.sh@337 -- # local 'op=<' 00:06:17.890 16:25:55 -- scripts/common.sh@339 -- # ver1_l=2 00:06:17.890 16:25:55 -- scripts/common.sh@340 -- # ver2_l=1 00:06:17.890 16:25:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:17.890 16:25:55 -- scripts/common.sh@343 -- # case "$op" in 00:06:17.890 16:25:55 -- scripts/common.sh@344 -- # : 1 00:06:17.890 16:25:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:17.890 16:25:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.890 16:25:55 -- scripts/common.sh@364 -- # decimal 1 00:06:18.149 16:25:55 -- scripts/common.sh@352 -- # local d=1 00:06:18.149 16:25:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.149 16:25:55 -- scripts/common.sh@354 -- # echo 1 00:06:18.149 16:25:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:18.149 16:25:55 -- scripts/common.sh@365 -- # decimal 2 00:06:18.149 16:25:55 -- scripts/common.sh@352 -- # local d=2 00:06:18.149 16:25:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.149 16:25:55 -- scripts/common.sh@354 -- # echo 2 00:06:18.149 16:25:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:18.149 16:25:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:18.149 16:25:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:18.149 16:25:55 -- scripts/common.sh@367 -- # return 0 00:06:18.149 16:25:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.149 16:25:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:18.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.149 --rc genhtml_branch_coverage=1 00:06:18.149 --rc genhtml_function_coverage=1 00:06:18.149 --rc genhtml_legend=1 00:06:18.149 --rc geninfo_all_blocks=1 00:06:18.149 --rc geninfo_unexecuted_blocks=1 00:06:18.149 00:06:18.149 ' 00:06:18.149 16:25:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:18.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.149 --rc genhtml_branch_coverage=1 00:06:18.149 --rc genhtml_function_coverage=1 00:06:18.149 --rc genhtml_legend=1 00:06:18.149 --rc geninfo_all_blocks=1 00:06:18.149 --rc geninfo_unexecuted_blocks=1 00:06:18.149 00:06:18.149 ' 00:06:18.149 16:25:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:18.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.149 --rc genhtml_branch_coverage=1 00:06:18.149 --rc genhtml_function_coverage=1 00:06:18.149 --rc genhtml_legend=1 00:06:18.149 --rc geninfo_all_blocks=1 00:06:18.149 --rc geninfo_unexecuted_blocks=1 00:06:18.149 00:06:18.149 ' 00:06:18.149 16:25:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:18.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.149 --rc genhtml_branch_coverage=1 00:06:18.149 --rc genhtml_function_coverage=1 00:06:18.149 --rc genhtml_legend=1 00:06:18.149 --rc geninfo_all_blocks=1 00:06:18.149 --rc geninfo_unexecuted_blocks=1 00:06:18.149 00:06:18.149 ' 00:06:18.149 16:25:55 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:18.149 16:25:55 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:18.149 16:25:55 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:18.149 16:25:55 -- accel/accel.sh@59 -- # spdk_tgt_pid=70349 00:06:18.149 16:25:55 -- accel/accel.sh@60 -- # waitforlisten 70349 00:06:18.149 16:25:55 -- common/autotest_common.sh@829 -- # '[' -z 70349 ']' 00:06:18.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.149 16:25:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.149 16:25:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.149 16:25:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
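[editor's aside] The rpc_addr/max_retries locals above belong to the suite's waitforlisten helper, which polls until the freshly forked target is accepting RPCs on its UNIX socket. A simplified sketch of that polling pattern — not the helper's actual body, which also verifies the pid is still alive:

  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      [ -S "$rpc_addr" ] && break      # socket exists; target is listening
      sleep 0.1
  done
  (( i < max_retries )) || echo "timed out waiting on $rpc_addr" >&2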
00:06:18.149 16:25:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.149 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:06:18.149 16:25:55 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:18.149 16:25:55 -- accel/accel.sh@58 -- # build_accel_config 00:06:18.149 16:25:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.149 16:25:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.149 16:25:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.149 16:25:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.149 16:25:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.149 16:25:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.149 16:25:55 -- accel/accel.sh@42 -- # jq -r . 00:06:18.149 [2024-11-16 16:25:55.454535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.149 [2024-11-16 16:25:55.454631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70349 ] 00:06:18.149 [2024-11-16 16:25:55.593303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.408 [2024-11-16 16:25:55.666178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.408 [2024-11-16 16:25:55.666351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.975 16:25:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.975 16:25:56 -- common/autotest_common.sh@862 -- # return 0 00:06:18.975 16:25:56 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:18.975 16:25:56 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:18.975 16:25:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.975 16:25:56 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:18.975 16:25:56 -- common/autotest_common.sh@10 -- # set +x 00:06:19.234 16:25:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # IFS== 00:06:19.234 16:25:56 -- accel/accel.sh@64 -- # read -r opc module 00:06:19.234 16:25:56 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:19.234 16:25:56 -- accel/accel.sh@67 -- # killprocess 70349 00:06:19.234 16:25:56 -- common/autotest_common.sh@936 -- # '[' -z 70349 ']' 00:06:19.234 16:25:56 -- common/autotest_common.sh@940 -- # kill -0 70349 00:06:19.234 16:25:56 -- common/autotest_common.sh@941 -- # uname 00:06:19.234 16:25:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:19.234 16:25:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70349 00:06:19.234 16:25:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:19.234 killing process with pid 70349 00:06:19.234 16:25:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:19.234 16:25:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70349' 00:06:19.234 16:25:56 -- common/autotest_common.sh@955 -- # kill 70349 00:06:19.234 16:25:56 -- common/autotest_common.sh@960 -- # wait 70349 00:06:19.802 16:25:57 -- accel/accel.sh@68 -- # trap - ERR 00:06:19.802 16:25:57 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:19.802 16:25:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:19.802 16:25:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.802 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.802 16:25:57 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:19.802 16:25:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:19.802 16:25:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.802 16:25:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.802 16:25:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.802 16:25:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.802 16:25:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.802 16:25:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.802 16:25:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.802 16:25:57 -- accel/accel.sh@42 -- # jq -r . 
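[editor's aside] The long IFS== loop above is parsing one opcode=module pair per line out of accel_get_opc_assignments; with a default build every opcode reports the software module. Standalone, the same query is (rpc.py path assumed, jq filter verbatim from the test):

  scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'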
00:06:19.802 16:25:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.802 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.802 16:25:57 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:19.802 16:25:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:19.802 16:25:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.802 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:19.802 ************************************ 00:06:19.802 START TEST accel_missing_filename 00:06:19.802 ************************************ 00:06:19.802 16:25:57 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:19.802 16:25:57 -- common/autotest_common.sh@650 -- # local es=0 00:06:19.802 16:25:57 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:19.802 16:25:57 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:19.802 16:25:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.802 16:25:57 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:19.802 16:25:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.802 16:25:57 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:19.802 16:25:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:19.802 16:25:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.802 16:25:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.802 16:25:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.802 16:25:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.802 16:25:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.802 16:25:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.802 16:25:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.802 16:25:57 -- accel/accel.sh@42 -- # jq -r . 00:06:19.802 [2024-11-16 16:25:57.141244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.802 [2024-11-16 16:25:57.141328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70423 ] 00:06:19.802 [2024-11-16 16:25:57.271422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.061 [2024-11-16 16:25:57.341702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.061 [2024-11-16 16:25:57.414647] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.061 [2024-11-16 16:25:57.518824] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:20.319 A filename is required. 
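[editor's aside] accel_missing_filename exercises an argument-validation path: for compress/decompress workloads accel_perf reads its input from the file named by -l, so the bare invocation must abort with the "A filename is required." line above. The compress_verify test that follows adds -y instead, which is likewise rejected because compression has no verify mode. Both negative forms, as the suite runs them:

  accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$accel_perf" -t 1 -w compress                                             # aborts: no -l input file
  "$accel_perf" -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y   # aborts: verify unsupported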
00:06:20.319 16:25:57 -- common/autotest_common.sh@653 -- # es=234 00:06:20.319 16:25:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.319 16:25:57 -- common/autotest_common.sh@662 -- # es=106 00:06:20.319 16:25:57 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.319 16:25:57 -- common/autotest_common.sh@670 -- # es=1 00:06:20.319 16:25:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.319 00:06:20.319 real 0m0.509s 00:06:20.319 user 0m0.315s 00:06:20.319 sys 0m0.141s 00:06:20.319 16:25:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.319 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.319 ************************************ 00:06:20.319 END TEST accel_missing_filename 00:06:20.319 ************************************ 00:06:20.319 16:25:57 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.319 16:25:57 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:20.319 16:25:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.319 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:06:20.319 ************************************ 00:06:20.319 START TEST accel_compress_verify 00:06:20.319 ************************************ 00:06:20.319 16:25:57 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.319 16:25:57 -- common/autotest_common.sh@650 -- # local es=0 00:06:20.319 16:25:57 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.319 16:25:57 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:20.319 16:25:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.319 16:25:57 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:20.319 16:25:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.319 16:25:57 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.319 16:25:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.319 16:25:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.319 16:25:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.319 16:25:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.320 16:25:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.320 16:25:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.320 16:25:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.320 16:25:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.320 16:25:57 -- accel/accel.sh@42 -- # jq -r . 00:06:20.320 [2024-11-16 16:25:57.704505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:20.320 [2024-11-16 16:25:57.704596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70453 ] 00:06:20.578 [2024-11-16 16:25:57.841628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.578 [2024-11-16 16:25:57.916376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.578 [2024-11-16 16:25:57.990909] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.837 [2024-11-16 16:25:58.095102] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:20.837 00:06:20.837 Compression does not support the verify option, aborting. 00:06:20.837 16:25:58 -- common/autotest_common.sh@653 -- # es=161 00:06:20.837 16:25:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.837 16:25:58 -- common/autotest_common.sh@662 -- # es=33 00:06:20.837 16:25:58 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.837 16:25:58 -- common/autotest_common.sh@670 -- # es=1 00:06:20.837 16:25:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.837 00:06:20.837 real 0m0.524s 00:06:20.837 user 0m0.332s 00:06:20.837 sys 0m0.141s 00:06:20.837 16:25:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.837 16:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.837 ************************************ 00:06:20.837 END TEST accel_compress_verify 00:06:20.837 ************************************ 00:06:20.837 16:25:58 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:20.837 16:25:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:20.837 16:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.837 16:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.837 ************************************ 00:06:20.837 START TEST accel_wrong_workload 00:06:20.837 ************************************ 00:06:20.837 16:25:58 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:20.837 16:25:58 -- common/autotest_common.sh@650 -- # local es=0 00:06:20.837 16:25:58 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:20.837 16:25:58 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:20.837 16:25:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.837 16:25:58 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:20.837 16:25:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.837 16:25:58 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:20.837 16:25:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:20.837 16:25:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.837 16:25:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.837 16:25:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.837 16:25:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.837 16:25:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.837 16:25:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.837 16:25:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.837 16:25:58 -- accel/accel.sh@42 -- # jq -r . 
00:06:20.837 Unsupported workload type: foobar 00:06:20.837 [2024-11-16 16:25:58.277802] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:20.837 accel_perf options: 00:06:20.837 [-h help message] 00:06:20.837 [-q queue depth per core] 00:06:20.837 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:20.837 [-T number of threads per core 00:06:20.837 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:20.837 [-t time in seconds] 00:06:20.837 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:20.837 [ dif_verify, , dif_generate, dif_generate_copy 00:06:20.837 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:20.837 [-l for compress/decompress workloads, name of uncompressed input file 00:06:20.837 [-S for crc32c workload, use this seed value (default 0) 00:06:20.837 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:20.837 [-f for fill workload, use this BYTE value (default 255) 00:06:20.837 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:20.837 [-y verify result if this switch is on] 00:06:20.837 [-a tasks to allocate per core (default: same value as -q)] 00:06:20.837 Can be used to spread operations across a wider range of memory. 00:06:20.837 16:25:58 -- common/autotest_common.sh@653 -- # es=1 00:06:20.837 16:25:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.837 16:25:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.837 16:25:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.837 00:06:20.837 real 0m0.031s 00:06:20.837 user 0m0.022s 00:06:20.837 sys 0m0.009s 00:06:20.837 16:25:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.837 16:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:20.837 ************************************ 00:06:20.837 END TEST accel_wrong_workload 00:06:20.837 ************************************ 00:06:21.096 16:25:58 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:21.096 16:25:58 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:21.096 16:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.096 16:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.096 ************************************ 00:06:21.096 START TEST accel_negative_buffers 00:06:21.096 ************************************ 00:06:21.096 16:25:58 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:21.096 16:25:58 -- common/autotest_common.sh@650 -- # local es=0 00:06:21.096 16:25:58 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:21.096 16:25:58 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:21.096 16:25:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.096 16:25:58 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:21.096 16:25:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.096 16:25:58 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:21.096 16:25:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:21.096 16:25:58 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:21.096 16:25:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.096 16:25:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.096 16:25:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.096 16:25:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.096 16:25:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.096 16:25:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.096 16:25:58 -- accel/accel.sh@42 -- # jq -r . 00:06:21.096 -x option must be non-negative. 00:06:21.096 [2024-11-16 16:25:58.362387] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:21.096 accel_perf options: 00:06:21.096 [-h help message] 00:06:21.096 [-q queue depth per core] 00:06:21.096 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:21.096 [-T number of threads per core 00:06:21.096 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:21.096 [-t time in seconds] 00:06:21.096 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:21.096 [ dif_verify, , dif_generate, dif_generate_copy 00:06:21.096 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:21.096 [-l for compress/decompress workloads, name of uncompressed input file 00:06:21.096 [-S for crc32c workload, use this seed value (default 0) 00:06:21.096 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:21.096 [-f for fill workload, use this BYTE value (default 255) 00:06:21.096 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:21.096 [-y verify result if this switch is on] 00:06:21.096 [-a tasks to allocate per core (default: same value as -q)] 00:06:21.097 Can be used to spread operations across a wider range of memory. 
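[editor's aside] Similarly, accel_negative_buffers passes -x -1, which fails the non-negativity check during option parsing; per the help text above, the xor workload needs at least two source buffers. A well-formed xor invocation for comparison (our example, flag meanings taken from that help text):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2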
00:06:21.097 16:25:58 -- common/autotest_common.sh@653 -- # es=1 00:06:21.097 16:25:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.097 16:25:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.097 16:25:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.097 00:06:21.097 real 0m0.033s 00:06:21.097 user 0m0.015s 00:06:21.097 sys 0m0.017s 00:06:21.097 16:25:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.097 16:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.097 ************************************ 00:06:21.097 END TEST accel_negative_buffers 00:06:21.097 ************************************ 00:06:21.097 16:25:58 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:21.097 16:25:58 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:21.097 16:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.097 16:25:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.097 ************************************ 00:06:21.097 START TEST accel_crc32c 00:06:21.097 ************************************ 00:06:21.097 16:25:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:21.097 16:25:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.097 16:25:58 -- accel/accel.sh@17 -- # local accel_module 00:06:21.097 16:25:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:21.097 16:25:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:21.097 16:25:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.097 16:25:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.097 16:25:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.097 16:25:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.097 16:25:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.097 16:25:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.097 16:25:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.097 16:25:58 -- accel/accel.sh@42 -- # jq -r . 00:06:21.097 [2024-11-16 16:25:58.442486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.097 [2024-11-16 16:25:58.442594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70506 ] 00:06:21.097 [2024-11-16 16:25:58.574123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.355 [2024-11-16 16:25:58.648124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.730 16:25:59 -- accel/accel.sh@18 -- # out=' 00:06:22.730 SPDK Configuration: 00:06:22.730 Core mask: 0x1 00:06:22.730 00:06:22.730 Accel Perf Configuration: 00:06:22.730 Workload Type: crc32c 00:06:22.730 CRC-32C seed: 32 00:06:22.730 Transfer size: 4096 bytes 00:06:22.730 Vector count 1 00:06:22.730 Module: software 00:06:22.730 Queue depth: 32 00:06:22.730 Allocate depth: 32 00:06:22.730 # threads/core: 1 00:06:22.730 Run time: 1 seconds 00:06:22.730 Verify: Yes 00:06:22.730 00:06:22.730 Running for 1 seconds... 
00:06:22.730 00:06:22.730 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.730 ------------------------------------------------------------------------------------ 00:06:22.730 0,0 570944/s 2230 MiB/s 0 0 00:06:22.730 ==================================================================================== 00:06:22.730 Total 570944/s 2230 MiB/s 0 0' 00:06:22.730 16:25:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:22.730 16:25:59 -- accel/accel.sh@20 -- # IFS=: 00:06:22.730 16:25:59 -- accel/accel.sh@20 -- # read -r var val 00:06:22.730 16:25:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:22.730 16:25:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.730 16:25:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.730 16:25:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.730 16:25:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.730 16:25:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.730 16:25:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.730 16:25:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.730 16:25:59 -- accel/accel.sh@42 -- # jq -r . 00:06:22.730 [2024-11-16 16:25:59.952962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.730 [2024-11-16 16:25:59.953049] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70531 ] 00:06:22.730 [2024-11-16 16:26:00.084369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.730 [2024-11-16 16:26:00.154905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val= 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val= 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val=0x1 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val= 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val= 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val=crc32c 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val=32 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val= 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val=software 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val=32 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val=32 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val=1 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.989 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.989 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.989 16:26:00 -- accel/accel.sh@21 -- # val=Yes 00:06:22.990 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.990 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.990 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.990 16:26:00 -- accel/accel.sh@21 -- # val= 00:06:22.990 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.990 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.990 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:22.990 16:26:00 -- accel/accel.sh@21 -- # val= 00:06:22.990 16:26:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.990 16:26:00 -- accel/accel.sh@20 -- # IFS=: 00:06:22.990 16:26:00 -- accel/accel.sh@20 -- # read -r var val 00:06:24.365 16:26:01 -- accel/accel.sh@21 -- # val= 00:06:24.365 16:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.365 16:26:01 -- accel/accel.sh@21 -- # val= 00:06:24.365 16:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.365 16:26:01 -- accel/accel.sh@21 -- # val= 00:06:24.365 16:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.365 16:26:01 -- accel/accel.sh@21 -- # val= 00:06:24.365 16:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.365 16:26:01 -- accel/accel.sh@21 -- # val= 00:06:24.365 16:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.365 16:26:01 -- 
accel/accel.sh@20 -- # read -r var val 00:06:24.365 16:26:01 -- accel/accel.sh@21 -- # val= 00:06:24.365 16:26:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # IFS=: 00:06:24.365 16:26:01 -- accel/accel.sh@20 -- # read -r var val 00:06:24.365 16:26:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.365 16:26:01 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:24.365 16:26:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.365 00:06:24.365 real 0m3.024s 00:06:24.365 user 0m2.533s 00:06:24.365 sys 0m0.287s 00:06:24.365 16:26:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.365 16:26:01 -- common/autotest_common.sh@10 -- # set +x 00:06:24.365 ************************************ 00:06:24.365 END TEST accel_crc32c 00:06:24.365 ************************************ 00:06:24.365 16:26:01 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:24.365 16:26:01 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:24.365 16:26:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.365 16:26:01 -- common/autotest_common.sh@10 -- # set +x 00:06:24.365 ************************************ 00:06:24.365 START TEST accel_crc32c_C2 00:06:24.365 ************************************ 00:06:24.365 16:26:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:24.365 16:26:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.365 16:26:01 -- accel/accel.sh@17 -- # local accel_module 00:06:24.365 16:26:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:24.366 16:26:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:24.366 16:26:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.366 16:26:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.366 16:26:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.366 16:26:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.366 16:26:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.366 16:26:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.366 16:26:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.366 16:26:01 -- accel/accel.sh@42 -- # jq -r . 00:06:24.366 [2024-11-16 16:26:01.525450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.366 [2024-11-16 16:26:01.525561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70560 ] 00:06:24.366 [2024-11-16 16:26:01.662240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.366 [2024-11-16 16:26:01.733535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.742 16:26:02 -- accel/accel.sh@18 -- # out=' 00:06:25.742 SPDK Configuration: 00:06:25.742 Core mask: 0x1 00:06:25.742 00:06:25.742 Accel Perf Configuration: 00:06:25.742 Workload Type: crc32c 00:06:25.742 CRC-32C seed: 0 00:06:25.742 Transfer size: 4096 bytes 00:06:25.742 Vector count 2 00:06:25.742 Module: software 00:06:25.742 Queue depth: 32 00:06:25.742 Allocate depth: 32 00:06:25.742 # threads/core: 1 00:06:25.742 Run time: 1 seconds 00:06:25.742 Verify: Yes 00:06:25.742 00:06:25.742 Running for 1 seconds... 
00:06:25.742 00:06:25.742 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.742 ------------------------------------------------------------------------------------ 00:06:25.742 0,0 432800/s 3381 MiB/s 0 0 00:06:25.742 ==================================================================================== 00:06:25.742 Total 432800/s 3381 MiB/s 0 0' 00:06:25.742 16:26:02 -- accel/accel.sh@20 -- # IFS=: 00:06:25.742 16:26:02 -- accel/accel.sh@20 -- # read -r var val 00:06:25.742 16:26:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:25.742 16:26:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:25.742 16:26:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.742 16:26:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.742 16:26:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.742 16:26:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.742 16:26:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.742 16:26:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.742 16:26:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.742 16:26:02 -- accel/accel.sh@42 -- # jq -r . 00:06:25.742 [2024-11-16 16:26:03.015553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.742 [2024-11-16 16:26:03.015652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ] 00:06:25.742 [2024-11-16 16:26:03.153336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.742 [2024-11-16 16:26:03.218969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val= 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val= 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val=0x1 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val= 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val= 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val=crc32c 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val=0 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 --
accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val= 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val=software 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val=32 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val=32 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val=1 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val=Yes 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val= 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:26.001 16:26:03 -- accel/accel.sh@21 -- # val= 00:06:26.001 16:26:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # IFS=: 00:06:26.001 16:26:03 -- accel/accel.sh@20 -- # read -r var val 00:06:27.378 16:26:04 -- accel/accel.sh@21 -- # val= 00:06:27.378 16:26:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.378 16:26:04 -- accel/accel.sh@21 -- # val= 00:06:27.378 16:26:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.378 16:26:04 -- accel/accel.sh@21 -- # val= 00:06:27.378 16:26:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.378 16:26:04 -- accel/accel.sh@21 -- # val= 00:06:27.378 16:26:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.378 16:26:04 -- accel/accel.sh@21 -- # val= 00:06:27.378 16:26:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.378 16:26:04 -- 
accel/accel.sh@20 -- # read -r var val 00:06:27.378 16:26:04 -- accel/accel.sh@21 -- # val= 00:06:27.378 16:26:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # IFS=: 00:06:27.378 16:26:04 -- accel/accel.sh@20 -- # read -r var val 00:06:27.378 16:26:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.378 16:26:04 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:27.378 16:26:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.378 00:06:27.378 real 0m3.007s 00:06:27.378 user 0m2.509s 00:06:27.378 sys 0m0.289s 00:06:27.378 16:26:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.378 16:26:04 -- common/autotest_common.sh@10 -- # set +x 00:06:27.378 ************************************ 00:06:27.378 END TEST accel_crc32c_C2 00:06:27.378 ************************************ 00:06:27.378 16:26:04 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:27.378 16:26:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:27.378 16:26:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.378 16:26:04 -- common/autotest_common.sh@10 -- # set +x 00:06:27.378 ************************************ 00:06:27.378 START TEST accel_copy 00:06:27.378 ************************************ 00:06:27.378 16:26:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:27.378 16:26:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.378 16:26:04 -- accel/accel.sh@17 -- # local accel_module 00:06:27.378 16:26:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:27.378 16:26:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:27.378 16:26:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.378 16:26:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.378 16:26:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.378 16:26:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.378 16:26:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.379 16:26:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.379 16:26:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.379 16:26:04 -- accel/accel.sh@42 -- # jq -r . 00:06:27.379 [2024-11-16 16:26:04.586572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.379 [2024-11-16 16:26:04.586660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70614 ] 00:06:27.379 [2024-11-16 16:26:04.725127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.379 [2024-11-16 16:26:04.795570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.754 16:26:06 -- accel/accel.sh@18 -- # out=' 00:06:28.754 SPDK Configuration: 00:06:28.754 Core mask: 0x1 00:06:28.754 00:06:28.754 Accel Perf Configuration: 00:06:28.754 Workload Type: copy 00:06:28.754 Transfer size: 4096 bytes 00:06:28.754 Vector count 1 00:06:28.754 Module: software 00:06:28.754 Queue depth: 32 00:06:28.754 Allocate depth: 32 00:06:28.754 # threads/core: 1 00:06:28.754 Run time: 1 seconds 00:06:28.754 Verify: Yes 00:06:28.754 00:06:28.754 Running for 1 seconds... 
00:06:28.754 00:06:28.754 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.754 ------------------------------------------------------------------------------------ 00:06:28.754 0,0 394624/s 1541 MiB/s 0 0 00:06:28.754 ==================================================================================== 00:06:28.754 Total 394624/s 1541 MiB/s 0 0' 00:06:28.754 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:28.754 16:26:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:28.754 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:28.754 16:26:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:28.754 16:26:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.754 16:26:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.754 16:26:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.754 16:26:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.754 16:26:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.754 16:26:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.754 16:26:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.754 16:26:06 -- accel/accel.sh@42 -- # jq -r . 00:06:28.754 [2024-11-16 16:26:06.079556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.754 [2024-11-16 16:26:06.079638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70639 ] 00:06:28.754 [2024-11-16 16:26:06.217181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.014 [2024-11-16 16:26:06.283379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val= 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val= 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val=0x1 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val= 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val= 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val=copy 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- 
accel/accel.sh@21 -- # val= 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val=software 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val=32 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val=32 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val=1 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val=Yes 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val= 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:29.014 16:26:06 -- accel/accel.sh@21 -- # val= 00:06:29.014 16:26:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # IFS=: 00:06:29.014 16:26:06 -- accel/accel.sh@20 -- # read -r var val 00:06:30.392 16:26:07 -- accel/accel.sh@21 -- # val= 00:06:30.392 16:26:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.392 16:26:07 -- accel/accel.sh@21 -- # val= 00:06:30.392 16:26:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.392 16:26:07 -- accel/accel.sh@21 -- # val= 00:06:30.392 16:26:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.392 16:26:07 -- accel/accel.sh@21 -- # val= 00:06:30.392 16:26:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.392 16:26:07 -- accel/accel.sh@21 -- # val= 00:06:30.392 16:26:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # read -r var val 00:06:30.392 16:26:07 -- accel/accel.sh@21 -- # val= 00:06:30.392 16:26:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.392 16:26:07 -- accel/accel.sh@20 -- # IFS=: 00:06:30.392 16:26:07 -- 
accel/accel.sh@20 -- # read -r var val 00:06:30.392 16:26:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.392 16:26:07 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:30.392 16:26:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.392 00:06:30.392 real 0m2.980s 00:06:30.392 user 0m2.510s 00:06:30.392 sys 0m0.292s 00:06:30.392 16:26:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.392 16:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.392 ************************************ 00:06:30.392 END TEST accel_copy 00:06:30.392 ************************************ 00:06:30.393 16:26:07 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.393 16:26:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:30.393 16:26:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.393 16:26:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.393 ************************************ 00:06:30.393 START TEST accel_fill 00:06:30.393 ************************************ 00:06:30.393 16:26:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.393 16:26:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.393 16:26:07 -- accel/accel.sh@17 -- # local accel_module 00:06:30.393 16:26:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.393 16:26:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:30.393 16:26:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.393 16:26:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.393 16:26:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.393 16:26:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.393 16:26:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.393 16:26:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.393 16:26:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.393 16:26:07 -- accel/accel.sh@42 -- # jq -r . 00:06:30.393 [2024-11-16 16:26:07.617578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.393 [2024-11-16 16:26:07.617662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70668 ] 00:06:30.393 [2024-11-16 16:26:07.753288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.393 [2024-11-16 16:26:07.816693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.770 00:06:31.770 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.770 ------------------------------------------------------------------------------------ 00:06:31.770 0,0 572992/s 2238 MiB/s 0 0 00:06:31.770 ==================================================================================== 00:06:31.770 Total 572992/s 2238 MiB/s 0 0' 00:06:31.770 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:31.770 16:26:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.770 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:31.770 16:26:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:31.770 16:26:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.770 16:26:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.770 16:26:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.770 16:26:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.770 16:26:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.770 16:26:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.770 16:26:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.770 16:26:09 -- accel/accel.sh@42 -- # jq -r . 00:06:31.770 [2024-11-16 16:26:09.128168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.770 [2024-11-16 16:26:09.128243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70695 ] 00:06:32.029 [2024-11-16 16:26:09.265225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.029 [2024-11-16 16:26:09.326610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val= 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val= 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val=0x1 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val= 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val= 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val=fill 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val=0x80 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 
00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val= 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val=software 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val=64 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val=64 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val=1 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val=Yes 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val= 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:32.029 16:26:09 -- accel/accel.sh@21 -- # val= 00:06:32.029 16:26:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # IFS=: 00:06:32.029 16:26:09 -- accel/accel.sh@20 -- # read -r var val 00:06:33.407 16:26:10 -- accel/accel.sh@21 -- # val= 00:06:33.407 16:26:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.407 16:26:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.407 16:26:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.407 16:26:10 -- accel/accel.sh@21 -- # val= 00:06:33.407 16:26:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.407 16:26:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.407 16:26:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.407 16:26:10 -- accel/accel.sh@21 -- # val= 00:06:33.408 16:26:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.408 16:26:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.408 16:26:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.408 16:26:10 -- accel/accel.sh@21 -- # val= 00:06:33.408 16:26:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.408 16:26:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.408 16:26:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.408 16:26:10 -- accel/accel.sh@21 -- # val= 00:06:33.408 16:26:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.408 16:26:10 -- accel/accel.sh@20 -- # IFS=: 
00:06:33.408 16:26:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.408 16:26:10 -- accel/accel.sh@21 -- # val= 00:06:33.408 16:26:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.408 16:26:10 -- accel/accel.sh@20 -- # IFS=: 00:06:33.408 16:26:10 -- accel/accel.sh@20 -- # read -r var val 00:06:33.408 16:26:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.408 16:26:10 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:33.408 16:26:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.408 00:06:33.408 real 0m2.990s 00:06:33.408 user 0m2.508s 00:06:33.408 sys 0m0.275s 00:06:33.408 16:26:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.408 16:26:10 -- common/autotest_common.sh@10 -- # set +x 00:06:33.408 ************************************ 00:06:33.408 END TEST accel_fill 00:06:33.408 ************************************ 00:06:33.408 16:26:10 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:33.408 16:26:10 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:33.408 16:26:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.408 16:26:10 -- common/autotest_common.sh@10 -- # set +x 00:06:33.408 ************************************ 00:06:33.408 START TEST accel_copy_crc32c 00:06:33.408 ************************************ 00:06:33.408 16:26:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:33.408 16:26:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.408 16:26:10 -- accel/accel.sh@17 -- # local accel_module 00:06:33.408 16:26:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:33.408 16:26:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:33.408 16:26:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.408 16:26:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.408 16:26:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.408 16:26:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.408 16:26:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.408 16:26:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.408 16:26:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.408 16:26:10 -- accel/accel.sh@42 -- # jq -r . 00:06:33.408 [2024-11-16 16:26:10.660743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.408 [2024-11-16 16:26:10.660828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70724 ] 00:06:33.408 [2024-11-16 16:26:10.797774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.408 [2024-11-16 16:26:10.871836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.785 16:26:12 -- accel/accel.sh@18 -- # out=' 00:06:34.785 SPDK Configuration: 00:06:34.785 Core mask: 0x1 00:06:34.785 00:06:34.785 Accel Perf Configuration: 00:06:34.785 Workload Type: copy_crc32c 00:06:34.785 CRC-32C seed: 0 00:06:34.785 Vector size: 4096 bytes 00:06:34.785 Transfer size: 4096 bytes 00:06:34.785 Vector count 1 00:06:34.785 Module: software 00:06:34.785 Queue depth: 32 00:06:34.785 Allocate depth: 32 00:06:34.785 # threads/core: 1 00:06:34.785 Run time: 1 seconds 00:06:34.785 Verify: Yes 00:06:34.785 00:06:34.785 Running for 1 seconds... 
00:06:34.785 00:06:34.785 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.785 ------------------------------------------------------------------------------------ 00:06:34.785 0,0 311808/s 1218 MiB/s 0 0 00:06:34.785 ==================================================================================== 00:06:34.785 Total 311808/s 1218 MiB/s 0 0' 00:06:34.785 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:34.785 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:34.785 16:26:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:34.785 16:26:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:34.785 16:26:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.785 16:26:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.785 16:26:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.785 16:26:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.785 16:26:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.785 16:26:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.785 16:26:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.785 16:26:12 -- accel/accel.sh@42 -- # jq -r . 00:06:34.785 [2024-11-16 16:26:12.184975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.785 [2024-11-16 16:26:12.185073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70749 ] 00:06:35.044 [2024-11-16 16:26:12.321683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.044 [2024-11-16 16:26:12.389069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val= 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val= 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val=0x1 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val= 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val= 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val=0 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 
16:26:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val= 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val=software 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val=32 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val=32 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.044 16:26:12 -- accel/accel.sh@21 -- # val=1 00:06:35.044 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.044 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.045 16:26:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.045 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.045 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.045 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.045 16:26:12 -- accel/accel.sh@21 -- # val=Yes 00:06:35.045 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.045 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.045 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.045 16:26:12 -- accel/accel.sh@21 -- # val= 00:06:35.045 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.045 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.045 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:35.045 16:26:12 -- accel/accel.sh@21 -- # val= 00:06:35.045 16:26:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.045 16:26:12 -- accel/accel.sh@20 -- # IFS=: 00:06:35.045 16:26:12 -- accel/accel.sh@20 -- # read -r var val 00:06:36.494 16:26:13 -- accel/accel.sh@21 -- # val= 00:06:36.494 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.494 16:26:13 -- accel/accel.sh@21 -- # val= 00:06:36.494 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.494 16:26:13 -- accel/accel.sh@21 -- # val= 00:06:36.494 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.494 16:26:13 -- accel/accel.sh@21 -- # val= 00:06:36.494 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # IFS=: 
00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.494 16:26:13 -- accel/accel.sh@21 -- # val= 00:06:36.494 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.494 16:26:13 -- accel/accel.sh@21 -- # val= 00:06:36.494 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:06:36.494 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:06:36.494 16:26:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.494 16:26:13 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:36.494 16:26:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.494 00:06:36.494 real 0m3.041s 00:06:36.494 user 0m2.550s 00:06:36.494 sys 0m0.285s 00:06:36.494 ************************************ 00:06:36.494 END TEST accel_copy_crc32c 00:06:36.494 ************************************ 00:06:36.494 16:26:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.494 16:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:36.494 16:26:13 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:36.494 16:26:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:36.494 16:26:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.494 16:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:36.494 ************************************ 00:06:36.495 START TEST accel_copy_crc32c_C2 00:06:36.495 ************************************ 00:06:36.495 16:26:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:36.495 16:26:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.495 16:26:13 -- accel/accel.sh@17 -- # local accel_module 00:06:36.495 16:26:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:36.495 16:26:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:36.495 16:26:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.495 16:26:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.495 16:26:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.495 16:26:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.495 16:26:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.495 16:26:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.495 16:26:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.495 16:26:13 -- accel/accel.sh@42 -- # jq -r . 00:06:36.495 [2024-11-16 16:26:13.762969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:36.495 [2024-11-16 16:26:13.763072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70778 ] 00:06:36.495 [2024-11-16 16:26:13.902012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.495 [2024-11-16 16:26:13.974365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.872 16:26:15 -- accel/accel.sh@18 -- # out=' 00:06:37.872 SPDK Configuration: 00:06:37.872 Core mask: 0x1 00:06:37.872 00:06:37.872 Accel Perf Configuration: 00:06:37.872 Workload Type: copy_crc32c 00:06:37.872 CRC-32C seed: 0 00:06:37.872 Vector size: 4096 bytes 00:06:37.872 Transfer size: 8192 bytes 00:06:37.872 Vector count 2 00:06:37.872 Module: software 00:06:37.872 Queue depth: 32 00:06:37.872 Allocate depth: 32 00:06:37.872 # threads/core: 1 00:06:37.872 Run time: 1 seconds 00:06:37.872 Verify: Yes 00:06:37.872 00:06:37.872 Running for 1 seconds... 00:06:37.872 00:06:37.872 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.872 ------------------------------------------------------------------------------------ 00:06:37.872 0,0 211936/s 1655 MiB/s 0 0 00:06:37.872 ==================================================================================== 00:06:37.872 Total 211936/s 1655 MiB/s 0 0' 00:06:37.872 16:26:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:37.872 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:37.872 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:37.872 16:26:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:37.872 16:26:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.872 16:26:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.872 16:26:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.872 16:26:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.872 16:26:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.872 16:26:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.872 16:26:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.872 16:26:15 -- accel/accel.sh@42 -- # jq -r . 00:06:37.872 [2024-11-16 16:26:15.193588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:37.873 [2024-11-16 16:26:15.193688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70798 ] 00:06:37.873 [2024-11-16 16:26:15.332180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.132 [2024-11-16 16:26:15.392375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val= 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val= 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val=0x1 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val= 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val= 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val=0 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val= 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val=software 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val=32 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val=32 
00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val=1 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val=Yes 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val= 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:38.132 16:26:15 -- accel/accel.sh@21 -- # val= 00:06:38.132 16:26:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # IFS=: 00:06:38.132 16:26:15 -- accel/accel.sh@20 -- # read -r var val 00:06:39.509 16:26:16 -- accel/accel.sh@21 -- # val= 00:06:39.509 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.509 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.510 16:26:16 -- accel/accel.sh@21 -- # val= 00:06:39.510 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.510 16:26:16 -- accel/accel.sh@21 -- # val= 00:06:39.510 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.510 16:26:16 -- accel/accel.sh@21 -- # val= 00:06:39.510 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.510 ************************************ 00:06:39.510 END TEST accel_copy_crc32c_C2 00:06:39.510 ************************************ 00:06:39.510 16:26:16 -- accel/accel.sh@21 -- # val= 00:06:39.510 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.510 16:26:16 -- accel/accel.sh@21 -- # val= 00:06:39.510 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:06:39.510 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:06:39.510 16:26:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.510 16:26:16 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:39.510 16:26:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.510 00:06:39.510 real 0m2.842s 00:06:39.510 user 0m2.396s 00:06:39.510 sys 0m0.245s 00:06:39.510 16:26:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.510 16:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:39.510 16:26:16 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:39.510 16:26:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:39.510 16:26:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.510 16:26:16 -- common/autotest_common.sh@10 -- # set +x 00:06:39.510 ************************************ 00:06:39.510 START TEST accel_dualcast 00:06:39.510 ************************************ 00:06:39.510 16:26:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:39.510 16:26:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.510 16:26:16 -- accel/accel.sh@17 -- # local accel_module 00:06:39.510 16:26:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:39.510 16:26:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:39.510 16:26:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.510 16:26:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.510 16:26:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.510 16:26:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.510 16:26:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.510 16:26:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.510 16:26:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.510 16:26:16 -- accel/accel.sh@42 -- # jq -r . 00:06:39.510 [2024-11-16 16:26:16.657442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.510 [2024-11-16 16:26:16.657542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70832 ] 00:06:39.510 [2024-11-16 16:26:16.794008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.510 [2024-11-16 16:26:16.861868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.887 16:26:18 -- accel/accel.sh@18 -- # out=' 00:06:40.887 SPDK Configuration: 00:06:40.887 Core mask: 0x1 00:06:40.887 00:06:40.887 Accel Perf Configuration: 00:06:40.887 Workload Type: dualcast 00:06:40.887 Transfer size: 4096 bytes 00:06:40.887 Vector count 1 00:06:40.887 Module: software 00:06:40.887 Queue depth: 32 00:06:40.887 Allocate depth: 32 00:06:40.887 # threads/core: 1 00:06:40.887 Run time: 1 seconds 00:06:40.887 Verify: Yes 00:06:40.887 00:06:40.887 Running for 1 seconds... 00:06:40.887 00:06:40.887 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.887 ------------------------------------------------------------------------------------ 00:06:40.887 0,0 420736/s 1643 MiB/s 0 0 00:06:40.887 ==================================================================================== 00:06:40.887 Total 420736/s 1643 MiB/s 0 0' 00:06:40.887 16:26:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:40.887 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.887 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.887 16:26:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:40.887 16:26:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.887 16:26:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.887 16:26:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.887 16:26:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.887 16:26:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.887 16:26:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.887 16:26:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.887 16:26:18 -- accel/accel.sh@42 -- # jq -r . 
00:06:40.887 [2024-11-16 16:26:18.061205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.888 [2024-11-16 16:26:18.061310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70853 ] 00:06:40.888 [2024-11-16 16:26:18.184232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.888 [2024-11-16 16:26:18.234018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val= 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val= 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val=0x1 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val= 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val= 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val=dualcast 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val= 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val=software 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val=32 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val=32 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val=1 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 
16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val=Yes 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val= 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:40.888 16:26:18 -- accel/accel.sh@21 -- # val= 00:06:40.888 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:06:40.888 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:06:42.278 16:26:19 -- accel/accel.sh@21 -- # val= 00:06:42.278 16:26:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.278 16:26:19 -- accel/accel.sh@21 -- # val= 00:06:42.278 16:26:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.278 16:26:19 -- accel/accel.sh@21 -- # val= 00:06:42.278 16:26:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.278 16:26:19 -- accel/accel.sh@21 -- # val= 00:06:42.278 16:26:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.278 16:26:19 -- accel/accel.sh@21 -- # val= 00:06:42.278 16:26:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.278 16:26:19 -- accel/accel.sh@21 -- # val= 00:06:42.278 16:26:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # IFS=: 00:06:42.278 16:26:19 -- accel/accel.sh@20 -- # read -r var val 00:06:42.278 16:26:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.278 16:26:19 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:42.278 16:26:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.278 00:06:42.278 real 0m2.788s 00:06:42.278 user 0m2.372s 00:06:42.278 sys 0m0.217s 00:06:42.278 16:26:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.278 16:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.278 ************************************ 00:06:42.278 END TEST accel_dualcast 00:06:42.278 ************************************ 00:06:42.278 16:26:19 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:42.278 16:26:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:42.278 16:26:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.278 16:26:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.278 ************************************ 00:06:42.278 START TEST accel_compare 00:06:42.278 ************************************ 00:06:42.278 16:26:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:42.278 
16:26:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.278 16:26:19 -- accel/accel.sh@17 -- # local accel_module 00:06:42.278 16:26:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:42.278 16:26:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:42.278 16:26:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.278 16:26:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.278 16:26:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.278 16:26:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.278 16:26:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.278 16:26:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.278 16:26:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.278 16:26:19 -- accel/accel.sh@42 -- # jq -r . 00:06:42.278 [2024-11-16 16:26:19.495869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.278 [2024-11-16 16:26:19.495976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70886 ] 00:06:42.278 [2024-11-16 16:26:19.631635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.278 [2024-11-16 16:26:19.683211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.656 16:26:20 -- accel/accel.sh@18 -- # out=' 00:06:43.656 SPDK Configuration: 00:06:43.656 Core mask: 0x1 00:06:43.656 00:06:43.656 Accel Perf Configuration: 00:06:43.656 Workload Type: compare 00:06:43.656 Transfer size: 4096 bytes 00:06:43.656 Vector count 1 00:06:43.656 Module: software 00:06:43.656 Queue depth: 32 00:06:43.656 Allocate depth: 32 00:06:43.656 # threads/core: 1 00:06:43.656 Run time: 1 seconds 00:06:43.656 Verify: Yes 00:06:43.656 00:06:43.656 Running for 1 seconds... 00:06:43.656 00:06:43.656 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.656 ------------------------------------------------------------------------------------ 00:06:43.656 0,0 549984/s 2148 MiB/s 0 0 00:06:43.656 ==================================================================================== 00:06:43.656 Total 549984/s 2148 MiB/s 0 0' 00:06:43.656 16:26:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:43.656 16:26:20 -- accel/accel.sh@20 -- # IFS=: 00:06:43.656 16:26:20 -- accel/accel.sh@20 -- # read -r var val 00:06:43.656 16:26:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:43.656 16:26:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.656 16:26:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.656 16:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.656 16:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.656 16:26:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.656 16:26:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.656 16:26:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.656 16:26:20 -- accel/accel.sh@42 -- # jq -r . 00:06:43.656 [2024-11-16 16:26:20.878451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:43.656 [2024-11-16 16:26:20.878557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70901 ] 00:06:43.656 [2024-11-16 16:26:21.008927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.656 [2024-11-16 16:26:21.062984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.656 16:26:21 -- accel/accel.sh@21 -- # val= 00:06:43.656 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.656 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.656 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.656 16:26:21 -- accel/accel.sh@21 -- # val= 00:06:43.656 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val=0x1 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val= 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val= 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val=compare 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val= 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val=software 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val=32 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val=32 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val=1 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val=Yes 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val= 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:43.657 16:26:21 -- accel/accel.sh@21 -- # val= 00:06:43.657 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:06:43.657 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:06:45.035 16:26:22 -- accel/accel.sh@21 -- # val= 00:06:45.035 16:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.035 16:26:22 -- accel/accel.sh@21 -- # val= 00:06:45.035 16:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.035 16:26:22 -- accel/accel.sh@21 -- # val= 00:06:45.035 16:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.035 16:26:22 -- accel/accel.sh@21 -- # val= 00:06:45.035 16:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.035 16:26:22 -- accel/accel.sh@21 -- # val= 00:06:45.035 16:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.035 16:26:22 -- accel/accel.sh@21 -- # val= 00:06:45.035 16:26:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # IFS=: 00:06:45.035 16:26:22 -- accel/accel.sh@20 -- # read -r var val 00:06:45.035 16:26:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.035 16:26:22 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:45.035 16:26:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.035 00:06:45.035 real 0m2.794s 00:06:45.035 user 0m2.393s 00:06:45.035 sys 0m0.204s 00:06:45.035 16:26:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.035 16:26:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.035 ************************************ 00:06:45.035 END TEST accel_compare 00:06:45.035 ************************************ 00:06:45.035 16:26:22 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:45.035 16:26:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:45.035 16:26:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.035 16:26:22 -- common/autotest_common.sh@10 -- # set +x 00:06:45.035 ************************************ 00:06:45.035 START TEST accel_xor 00:06:45.035 ************************************ 00:06:45.035 16:26:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:45.035 16:26:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.035 16:26:22 -- accel/accel.sh@17 -- # local accel_module 00:06:45.035 
16:26:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:45.035 16:26:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:45.035 16:26:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.035 16:26:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.035 16:26:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.035 16:26:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.035 16:26:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.035 16:26:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.035 16:26:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.035 16:26:22 -- accel/accel.sh@42 -- # jq -r . 00:06:45.035 [2024-11-16 16:26:22.341227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.035 [2024-11-16 16:26:22.341325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70936 ] 00:06:45.035 [2024-11-16 16:26:22.480707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.294 [2024-11-16 16:26:22.541570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.673 16:26:23 -- accel/accel.sh@18 -- # out=' 00:06:46.673 SPDK Configuration: 00:06:46.673 Core mask: 0x1 00:06:46.673 00:06:46.673 Accel Perf Configuration: 00:06:46.673 Workload Type: xor 00:06:46.673 Source buffers: 2 00:06:46.673 Transfer size: 4096 bytes 00:06:46.673 Vector count 1 00:06:46.673 Module: software 00:06:46.673 Queue depth: 32 00:06:46.673 Allocate depth: 32 00:06:46.673 # threads/core: 1 00:06:46.673 Run time: 1 seconds 00:06:46.673 Verify: Yes 00:06:46.673 00:06:46.673 Running for 1 seconds... 00:06:46.673 00:06:46.673 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.673 ------------------------------------------------------------------------------------ 00:06:46.673 0,0 291680/s 1139 MiB/s 0 0 00:06:46.673 ==================================================================================== 00:06:46.673 Total 291680/s 1139 MiB/s 0 0' 00:06:46.673 16:26:23 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:23 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:46.673 16:26:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:46.673 16:26:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.673 16:26:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.673 16:26:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.673 16:26:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.673 16:26:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.673 16:26:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.673 16:26:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.673 16:26:23 -- accel/accel.sh@42 -- # jq -r . 00:06:46.673 [2024-11-16 16:26:23.772093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:46.673 [2024-11-16 16:26:23.772198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70955 ] 00:06:46.673 [2024-11-16 16:26:23.908125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.673 [2024-11-16 16:26:23.967945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val= 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val= 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val=0x1 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val= 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val= 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val=xor 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val=2 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val= 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val=software 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val=32 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val=32 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val=1 00:06:46.673 16:26:24 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val=Yes 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val= 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:46.673 16:26:24 -- accel/accel.sh@21 -- # val= 00:06:46.673 16:26:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # IFS=: 00:06:46.673 16:26:24 -- accel/accel.sh@20 -- # read -r var val 00:06:48.051 16:26:25 -- accel/accel.sh@21 -- # val= 00:06:48.051 16:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:48.051 16:26:25 -- accel/accel.sh@21 -- # val= 00:06:48.051 16:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:48.051 16:26:25 -- accel/accel.sh@21 -- # val= 00:06:48.051 16:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:48.051 16:26:25 -- accel/accel.sh@21 -- # val= 00:06:48.051 16:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:48.051 16:26:25 -- accel/accel.sh@21 -- # val= 00:06:48.051 16:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:48.051 16:26:25 -- accel/accel.sh@21 -- # val= 00:06:48.051 16:26:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # IFS=: 00:06:48.051 16:26:25 -- accel/accel.sh@20 -- # read -r var val 00:06:48.051 16:26:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.051 16:26:25 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:48.051 16:26:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.051 00:06:48.051 real 0m2.843s 00:06:48.051 user 0m2.408s 00:06:48.051 sys 0m0.236s 00:06:48.051 16:26:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.051 ************************************ 00:06:48.051 END TEST accel_xor 00:06:48.051 ************************************ 00:06:48.051 16:26:25 -- common/autotest_common.sh@10 -- # set +x 00:06:48.051 16:26:25 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:48.051 16:26:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:48.051 16:26:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.051 16:26:25 -- common/autotest_common.sh@10 -- # set +x 00:06:48.051 ************************************ 00:06:48.051 START TEST accel_xor 00:06:48.051 ************************************ 00:06:48.051 
16:26:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:48.051 16:26:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.051 16:26:25 -- accel/accel.sh@17 -- # local accel_module 00:06:48.051 16:26:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:48.051 16:26:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:48.051 16:26:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.051 16:26:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.051 16:26:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.051 16:26:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.051 16:26:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.051 16:26:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.052 16:26:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.052 16:26:25 -- accel/accel.sh@42 -- # jq -r . 00:06:48.052 [2024-11-16 16:26:25.232240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.052 [2024-11-16 16:26:25.232486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70990 ] 00:06:48.052 [2024-11-16 16:26:25.368590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.052 [2024-11-16 16:26:25.422474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.430 16:26:26 -- accel/accel.sh@18 -- # out=' 00:06:49.430 SPDK Configuration: 00:06:49.430 Core mask: 0x1 00:06:49.430 00:06:49.430 Accel Perf Configuration: 00:06:49.430 Workload Type: xor 00:06:49.430 Source buffers: 3 00:06:49.430 Transfer size: 4096 bytes 00:06:49.430 Vector count 1 00:06:49.430 Module: software 00:06:49.430 Queue depth: 32 00:06:49.430 Allocate depth: 32 00:06:49.430 # threads/core: 1 00:06:49.430 Run time: 1 seconds 00:06:49.430 Verify: Yes 00:06:49.430 00:06:49.430 Running for 1 seconds... 00:06:49.430 00:06:49.430 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.430 ------------------------------------------------------------------------------------ 00:06:49.430 0,0 277568/s 1084 MiB/s 0 0 00:06:49.430 ==================================================================================== 00:06:49.430 Total 277568/s 1084 MiB/s 0 0' 00:06:49.430 16:26:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:49.430 16:26:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.430 16:26:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.430 16:26:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.430 16:26:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.430 16:26:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.430 16:26:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.430 16:26:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.430 16:26:26 -- accel/accel.sh@42 -- # jq -r . 00:06:49.430 [2024-11-16 16:26:26.618014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:49.430 [2024-11-16 16:26:26.618299] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71004 ] 00:06:49.430 [2024-11-16 16:26:26.741460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.430 [2024-11-16 16:26:26.791118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val= 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val= 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val=0x1 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val= 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val= 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val=xor 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val=3 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val= 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val=software 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.430 16:26:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.430 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.430 16:26:26 -- accel/accel.sh@21 -- # val=32 00:06:49.430 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.431 16:26:26 -- accel/accel.sh@21 -- # val=32 00:06:49.431 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.431 16:26:26 -- accel/accel.sh@21 -- # val=1 00:06:49.431 16:26:26 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.431 16:26:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.431 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.431 16:26:26 -- accel/accel.sh@21 -- # val=Yes 00:06:49.431 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.431 16:26:26 -- accel/accel.sh@21 -- # val= 00:06:49.431 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:49.431 16:26:26 -- accel/accel.sh@21 -- # val= 00:06:49.431 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:06:49.431 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:06:50.809 16:26:27 -- accel/accel.sh@21 -- # val= 00:06:50.809 16:26:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.809 16:26:27 -- accel/accel.sh@20 -- # IFS=: 00:06:50.809 16:26:27 -- accel/accel.sh@20 -- # read -r var val 00:06:50.809 16:26:27 -- accel/accel.sh@21 -- # val= 00:06:50.809 16:26:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.809 16:26:27 -- accel/accel.sh@20 -- # IFS=: 00:06:50.809 16:26:27 -- accel/accel.sh@20 -- # read -r var val 00:06:50.809 16:26:27 -- accel/accel.sh@21 -- # val= 00:06:50.809 16:26:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.809 16:26:27 -- accel/accel.sh@20 -- # IFS=: 00:06:50.809 16:26:27 -- accel/accel.sh@20 -- # read -r var val 00:06:50.809 16:26:28 -- accel/accel.sh@21 -- # val= 00:06:50.809 16:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.809 16:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:50.809 16:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:50.809 16:26:28 -- accel/accel.sh@21 -- # val= 00:06:50.809 16:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.809 16:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:50.809 16:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:50.809 16:26:28 -- accel/accel.sh@21 -- # val= 00:06:50.809 16:26:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.809 16:26:28 -- accel/accel.sh@20 -- # IFS=: 00:06:50.809 16:26:28 -- accel/accel.sh@20 -- # read -r var val 00:06:50.809 16:26:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.809 16:26:28 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:50.809 16:26:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.809 00:06:50.809 real 0m2.798s 00:06:50.809 user 0m2.393s 00:06:50.809 sys 0m0.204s 00:06:50.809 16:26:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.809 16:26:28 -- common/autotest_common.sh@10 -- # set +x 00:06:50.809 ************************************ 00:06:50.809 END TEST accel_xor 00:06:50.809 ************************************ 00:06:50.809 16:26:28 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:50.809 16:26:28 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:50.809 16:26:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.809 16:26:28 -- common/autotest_common.sh@10 -- # set +x 00:06:50.809 ************************************ 00:06:50.809 START TEST accel_dif_verify 00:06:50.809 ************************************ 
00:06:50.809 16:26:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:50.809 16:26:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.809 16:26:28 -- accel/accel.sh@17 -- # local accel_module 00:06:50.809 16:26:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:50.809 16:26:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:50.809 16:26:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.809 16:26:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.809 16:26:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.809 16:26:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.809 16:26:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.809 16:26:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.809 16:26:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.809 16:26:28 -- accel/accel.sh@42 -- # jq -r . 00:06:50.809 [2024-11-16 16:26:28.081938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.809 [2024-11-16 16:26:28.082038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71044 ] 00:06:50.809 [2024-11-16 16:26:28.218497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.809 [2024-11-16 16:26:28.276681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.186 16:26:29 -- accel/accel.sh@18 -- # out=' 00:06:52.186 SPDK Configuration: 00:06:52.186 Core mask: 0x1 00:06:52.186 00:06:52.186 Accel Perf Configuration: 00:06:52.186 Workload Type: dif_verify 00:06:52.186 Vector size: 4096 bytes 00:06:52.186 Transfer size: 4096 bytes 00:06:52.186 Block size: 512 bytes 00:06:52.186 Metadata size: 8 bytes 00:06:52.186 Vector count 1 00:06:52.186 Module: software 00:06:52.187 Queue depth: 32 00:06:52.187 Allocate depth: 32 00:06:52.187 # threads/core: 1 00:06:52.187 Run time: 1 seconds 00:06:52.187 Verify: No 00:06:52.187 00:06:52.187 Running for 1 seconds... 00:06:52.187 00:06:52.187 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.187 ------------------------------------------------------------------------------------ 00:06:52.187 0,0 124032/s 492 MiB/s 0 0 00:06:52.187 ==================================================================================== 00:06:52.187 Total 124032/s 484 MiB/s 0 0' 00:06:52.187 16:26:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:52.187 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.187 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.187 16:26:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:52.187 16:26:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.187 16:26:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.187 16:26:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.187 16:26:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.187 16:26:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.187 16:26:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.187 16:26:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.187 16:26:29 -- accel/accel.sh@42 -- # jq -r . 00:06:52.187 [2024-11-16 16:26:29.474850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:52.187 [2024-11-16 16:26:29.474941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71058 ] 00:06:52.187 [2024-11-16 16:26:29.597832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.187 [2024-11-16 16:26:29.649050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.445 16:26:29 -- accel/accel.sh@21 -- # val= 00:06:52.445 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.445 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.445 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.445 16:26:29 -- accel/accel.sh@21 -- # val= 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val=0x1 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val= 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val= 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val=dif_verify 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val= 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val=software 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 
-- # val=32 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val=32 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val=1 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val=No 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val= 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:52.446 16:26:29 -- accel/accel.sh@21 -- # val= 00:06:52.446 16:26:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # IFS=: 00:06:52.446 16:26:29 -- accel/accel.sh@20 -- # read -r var val 00:06:53.382 16:26:30 -- accel/accel.sh@21 -- # val= 00:06:53.382 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.382 16:26:30 -- accel/accel.sh@21 -- # val= 00:06:53.382 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.382 16:26:30 -- accel/accel.sh@21 -- # val= 00:06:53.382 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.382 16:26:30 -- accel/accel.sh@21 -- # val= 00:06:53.382 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.382 16:26:30 -- accel/accel.sh@21 -- # val= 00:06:53.382 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.382 16:26:30 -- accel/accel.sh@21 -- # val= 00:06:53.382 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:06:53.382 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:06:53.382 ************************************ 00:06:53.382 END TEST accel_dif_verify 00:06:53.382 ************************************ 00:06:53.382 16:26:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.382 16:26:30 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:53.382 16:26:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.382 00:06:53.382 real 0m2.776s 00:06:53.382 user 0m2.361s 00:06:53.382 sys 0m0.217s 00:06:53.382 16:26:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.382 
16:26:30 -- common/autotest_common.sh@10 -- # set +x 00:06:53.641 16:26:30 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:53.641 16:26:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:53.641 16:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.641 16:26:30 -- common/autotest_common.sh@10 -- # set +x 00:06:53.641 ************************************ 00:06:53.641 START TEST accel_dif_generate 00:06:53.641 ************************************ 00:06:53.641 16:26:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:53.641 16:26:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.641 16:26:30 -- accel/accel.sh@17 -- # local accel_module 00:06:53.641 16:26:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:53.641 16:26:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:53.641 16:26:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.641 16:26:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.641 16:26:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.641 16:26:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.641 16:26:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.641 16:26:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.641 16:26:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.641 16:26:30 -- accel/accel.sh@42 -- # jq -r . 00:06:53.641 [2024-11-16 16:26:30.906225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.641 [2024-11-16 16:26:30.906328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71087 ] 00:06:53.641 [2024-11-16 16:26:31.039462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.641 [2024-11-16 16:26:31.099639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.020 16:26:32 -- accel/accel.sh@18 -- # out=' 00:06:55.020 SPDK Configuration: 00:06:55.020 Core mask: 0x1 00:06:55.020 00:06:55.020 Accel Perf Configuration: 00:06:55.020 Workload Type: dif_generate 00:06:55.020 Vector size: 4096 bytes 00:06:55.020 Transfer size: 4096 bytes 00:06:55.020 Block size: 512 bytes 00:06:55.020 Metadata size: 8 bytes 00:06:55.020 Vector count 1 00:06:55.020 Module: software 00:06:55.020 Queue depth: 32 00:06:55.020 Allocate depth: 32 00:06:55.020 # threads/core: 1 00:06:55.020 Run time: 1 seconds 00:06:55.020 Verify: No 00:06:55.020 00:06:55.020 Running for 1 seconds... 
00:06:55.020 00:06:55.020 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.020 ------------------------------------------------------------------------------------ 00:06:55.020 0,0 152320/s 604 MiB/s 0 0 00:06:55.020 ==================================================================================== 00:06:55.020 Total 152320/s 595 MiB/s 0 0' 00:06:55.020 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.020 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.020 16:26:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:55.020 16:26:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:55.020 16:26:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.020 16:26:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.020 16:26:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.020 16:26:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.020 16:26:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.020 16:26:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.020 16:26:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.020 16:26:32 -- accel/accel.sh@42 -- # jq -r . 00:06:55.020 [2024-11-16 16:26:32.311945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.020 [2024-11-16 16:26:32.312038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71112 ] 00:06:55.020 [2024-11-16 16:26:32.448783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.020 [2024-11-16 16:26:32.504087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val= 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val= 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val=0x1 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val= 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val= 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val=dif_generate 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 
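The summary rows above reduce to bandwidth = transfers/s x transfer size (4096-byte vectors, MiB = 2^20 bytes). A one-line sanity check of the Total row, using only figures from the table:

awk 'BEGIN { printf "%d MiB/s\n", 152320 * 4096 / (1024 * 1024) }'
# prints 595, matching the Total row; the 604 MiB/s in the 0,0 row is
# presumably measured over that core's own timing window, hence the small gap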
00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val= 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val=software 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val=32 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val=32 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val=1 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val=No 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val= 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:55.282 16:26:32 -- accel/accel.sh@21 -- # val= 00:06:55.282 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:06:55.282 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:06:56.219 16:26:33 -- accel/accel.sh@21 -- # val= 00:06:56.219 16:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.219 16:26:33 -- accel/accel.sh@21 -- # val= 00:06:56.219 16:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.219 16:26:33 -- accel/accel.sh@21 -- # val= 00:06:56.219 16:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.219 16:26:33 -- 
accel/accel.sh@20 -- # IFS=: 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.219 16:26:33 -- accel/accel.sh@21 -- # val= 00:06:56.219 16:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.219 16:26:33 -- accel/accel.sh@21 -- # val= 00:06:56.219 16:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.219 16:26:33 -- accel/accel.sh@21 -- # val= 00:06:56.219 16:26:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # IFS=: 00:06:56.219 16:26:33 -- accel/accel.sh@20 -- # read -r var val 00:06:56.219 ************************************ 00:06:56.219 END TEST accel_dif_generate 00:06:56.219 ************************************ 00:06:56.219 16:26:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.219 16:26:33 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:56.219 16:26:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.219 00:06:56.219 real 0m2.806s 00:06:56.219 user 0m2.377s 00:06:56.219 sys 0m0.230s 00:06:56.219 16:26:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.219 16:26:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.479 16:26:33 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:56.479 16:26:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:56.479 16:26:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.479 16:26:33 -- common/autotest_common.sh@10 -- # set +x 00:06:56.479 ************************************ 00:06:56.479 START TEST accel_dif_generate_copy 00:06:56.479 ************************************ 00:06:56.479 16:26:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:56.479 16:26:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.479 16:26:33 -- accel/accel.sh@17 -- # local accel_module 00:06:56.479 16:26:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:56.479 16:26:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:56.479 16:26:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.479 16:26:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.479 16:26:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.479 16:26:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.479 16:26:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.479 16:26:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.479 16:26:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.479 16:26:33 -- accel/accel.sh@42 -- # jq -r . 00:06:56.479 [2024-11-16 16:26:33.766917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
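The accel_perf invocation captured above feeds a JSON accel config through /dev/fd/62 (built by build_accel_config and filtered with jq -r .). A by-hand rerun of the same workload, assuming the same build tree and that the config descriptor can be dropped when the default software modules are acceptable, would be just:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
# -t = run time in seconds, -w = workload type, per the command lines in this log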
00:06:56.479 [2024-11-16 16:26:33.767011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71141 ] 00:06:56.479 [2024-11-16 16:26:33.904297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.479 [2024-11-16 16:26:33.959304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.857 16:26:35 -- accel/accel.sh@18 -- # out=' 00:06:57.857 SPDK Configuration: 00:06:57.857 Core mask: 0x1 00:06:57.857 00:06:57.857 Accel Perf Configuration: 00:06:57.857 Workload Type: dif_generate_copy 00:06:57.857 Vector size: 4096 bytes 00:06:57.857 Transfer size: 4096 bytes 00:06:57.857 Vector count 1 00:06:57.857 Module: software 00:06:57.857 Queue depth: 32 00:06:57.857 Allocate depth: 32 00:06:57.857 # threads/core: 1 00:06:57.857 Run time: 1 seconds 00:06:57.857 Verify: No 00:06:57.857 00:06:57.857 Running for 1 seconds... 00:06:57.857 00:06:57.857 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.857 ------------------------------------------------------------------------------------ 00:06:57.857 0,0 116416/s 461 MiB/s 0 0 00:06:57.857 ==================================================================================== 00:06:57.857 Total 116416/s 454 MiB/s 0 0' 00:06:57.857 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:57.857 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:57.857 16:26:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.857 16:26:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.857 16:26:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.857 16:26:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.857 16:26:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.857 16:26:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.857 16:26:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.857 16:26:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.857 16:26:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.857 16:26:35 -- accel/accel.sh@42 -- # jq -r . 00:06:57.857 [2024-11-16 16:26:35.166518] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
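Against plain dif_generate (152320/s in the previous summary), dif_generate_copy sustains 116416/s; the difference is presumably the extra buffer copy in the workload. The ratio of the two Total rows:

awk 'BEGIN { printf "%.1f%%\n", 100 * 116416 / 152320 }'
# prints 76.4%, i.e. the copy variant runs at about three quarters of the
# generate-only rate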
00:06:57.857 [2024-11-16 16:26:35.166780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71155 ] 00:06:57.857 [2024-11-16 16:26:35.304320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.116 [2024-11-16 16:26:35.363576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val= 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val= 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val=0x1 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val= 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val= 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val= 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val=software 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val=32 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val=32 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 
-- # val=1 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val=No 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val= 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:58.116 16:26:35 -- accel/accel.sh@21 -- # val= 00:06:58.116 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:06:58.116 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:06:59.494 16:26:36 -- accel/accel.sh@21 -- # val= 00:06:59.494 16:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:59.494 16:26:36 -- accel/accel.sh@21 -- # val= 00:06:59.494 16:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:59.494 16:26:36 -- accel/accel.sh@21 -- # val= 00:06:59.494 16:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:59.494 16:26:36 -- accel/accel.sh@21 -- # val= 00:06:59.494 16:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:59.494 16:26:36 -- accel/accel.sh@21 -- # val= 00:06:59.494 16:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:59.494 16:26:36 -- accel/accel.sh@21 -- # val= 00:06:59.494 16:26:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # IFS=: 00:06:59.494 16:26:36 -- accel/accel.sh@20 -- # read -r var val 00:06:59.494 16:26:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.494 16:26:36 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:59.494 ************************************ 00:06:59.494 END TEST accel_dif_generate_copy 00:06:59.494 ************************************ 00:06:59.494 16:26:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.494 00:06:59.494 real 0m2.806s 00:06:59.494 user 0m2.386s 00:06:59.494 sys 0m0.217s 00:06:59.494 16:26:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.494 16:26:36 -- common/autotest_common.sh@10 -- # set +x 00:06:59.494 16:26:36 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:59.494 16:26:36 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.494 16:26:36 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:59.494 16:26:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.494 16:26:36 -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.494 ************************************ 00:06:59.494 START TEST accel_comp 00:06:59.494 ************************************ 00:06:59.494 16:26:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.494 16:26:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.494 16:26:36 -- accel/accel.sh@17 -- # local accel_module 00:06:59.494 16:26:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.494 16:26:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:59.494 16:26:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.494 16:26:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.494 16:26:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.494 16:26:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.494 16:26:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.494 16:26:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.494 16:26:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.494 16:26:36 -- accel/accel.sh@42 -- # jq -r . 00:06:59.494 [2024-11-16 16:26:36.626221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.494 [2024-11-16 16:26:36.626308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71195 ] 00:06:59.494 [2024-11-16 16:26:36.754840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.494 [2024-11-16 16:26:36.809650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.871 16:26:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:00.871 00:07:00.871 SPDK Configuration: 00:07:00.871 Core mask: 0x1 00:07:00.871 00:07:00.871 Accel Perf Configuration: 00:07:00.871 Workload Type: compress 00:07:00.871 Transfer size: 4096 bytes 00:07:00.871 Vector count 1 00:07:00.871 Module: software 00:07:00.871 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.871 Queue depth: 32 00:07:00.871 Allocate depth: 32 00:07:00.871 # threads/core: 1 00:07:00.871 Run time: 1 seconds 00:07:00.871 Verify: No 00:07:00.871 00:07:00.871 Running for 1 seconds... 
00:07:00.871 00:07:00.871 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.871 ------------------------------------------------------------------------------------ 00:07:00.871 0,0 59136/s 246 MiB/s 0 0 00:07:00.871 ==================================================================================== 00:07:00.871 Total 59136/s 231 MiB/s 0 0' 00:07:00.871 16:26:37 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.871 16:26:37 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.871 16:26:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.871 16:26:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.871 16:26:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.871 16:26:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.871 16:26:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.871 16:26:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.871 16:26:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.871 16:26:37 -- accel/accel.sh@42 -- # jq -r . 00:07:00.871 [2024-11-16 16:26:38.017209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.871 [2024-11-16 16:26:38.017297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71209 ] 00:07:00.871 [2024-11-16 16:26:38.156389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.871 [2024-11-16 16:26:38.217863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val= 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val= 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val= 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val=0x1 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val= 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val= 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val=compress 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 
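Each test above is launched through run_test, which prints the starred START/END banners and the real/user/sys timing line, and toggles xtrace around the body (the xtrace_disable / set +x pairs in this log). A minimal illustrative equivalent, not the actual helper from autotest_common.sh:

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # accounts for the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# usage, mirroring the invocation above:
run_test_sketch accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib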
00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val= 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val=software 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.871 16:26:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.871 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.871 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.872 16:26:38 -- accel/accel.sh@21 -- # val=32 00:07:00.872 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.872 16:26:38 -- accel/accel.sh@21 -- # val=32 00:07:00.872 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.872 16:26:38 -- accel/accel.sh@21 -- # val=1 00:07:00.872 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.872 16:26:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.872 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.872 16:26:38 -- accel/accel.sh@21 -- # val=No 00:07:00.872 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.872 16:26:38 -- accel/accel.sh@21 -- # val= 00:07:00.872 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:00.872 16:26:38 -- accel/accel.sh@21 -- # val= 00:07:00.872 16:26:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # IFS=: 00:07:00.872 16:26:38 -- accel/accel.sh@20 -- # read -r var val 00:07:02.250 16:26:39 -- accel/accel.sh@21 -- # val= 00:07:02.250 16:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # IFS=: 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # read -r var val 00:07:02.250 16:26:39 -- accel/accel.sh@21 -- # val= 00:07:02.250 16:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # IFS=: 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # read -r var val 00:07:02.250 16:26:39 -- accel/accel.sh@21 -- # val= 00:07:02.250 16:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # IFS=: 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # read -r var val 00:07:02.250 16:26:39 -- accel/accel.sh@21 -- # val= 
00:07:02.250 16:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # IFS=: 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # read -r var val 00:07:02.250 16:26:39 -- accel/accel.sh@21 -- # val= 00:07:02.250 16:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # IFS=: 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # read -r var val 00:07:02.250 16:26:39 -- accel/accel.sh@21 -- # val= 00:07:02.250 16:26:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # IFS=: 00:07:02.250 16:26:39 -- accel/accel.sh@20 -- # read -r var val 00:07:02.250 ************************************ 00:07:02.250 END TEST accel_comp 00:07:02.250 ************************************ 00:07:02.250 16:26:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.250 16:26:39 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:02.250 16:26:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.250 00:07:02.250 real 0m2.815s 00:07:02.250 user 0m2.392s 00:07:02.250 sys 0m0.223s 00:07:02.250 16:26:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.250 16:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.250 16:26:39 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:02.250 16:26:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:02.250 16:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.250 16:26:39 -- common/autotest_common.sh@10 -- # set +x 00:07:02.250 ************************************ 00:07:02.250 START TEST accel_decomp 00:07:02.250 ************************************ 00:07:02.250 16:26:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:02.250 16:26:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.250 16:26:39 -- accel/accel.sh@17 -- # local accel_module 00:07:02.250 16:26:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:02.250 16:26:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:02.250 16:26:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.250 16:26:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.250 16:26:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.250 16:26:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.250 16:26:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.250 16:26:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.250 16:26:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.250 16:26:39 -- accel/accel.sh@42 -- # jq -r . 00:07:02.250 [2024-11-16 16:26:39.491029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.250 [2024-11-16 16:26:39.491248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71249 ] 00:07:02.250 [2024-11-16 16:26:39.628448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.250 [2024-11-16 16:26:39.686593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.628 16:26:40 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:03.628 00:07:03.628 SPDK Configuration: 00:07:03.628 Core mask: 0x1 00:07:03.628 00:07:03.628 Accel Perf Configuration: 00:07:03.628 Workload Type: decompress 00:07:03.628 Transfer size: 4096 bytes 00:07:03.628 Vector count 1 00:07:03.628 Module: software 00:07:03.628 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.628 Queue depth: 32 00:07:03.628 Allocate depth: 32 00:07:03.628 # threads/core: 1 00:07:03.628 Run time: 1 seconds 00:07:03.628 Verify: Yes 00:07:03.628 00:07:03.628 Running for 1 seconds... 00:07:03.628 00:07:03.628 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.628 ------------------------------------------------------------------------------------ 00:07:03.628 0,0 83840/s 154 MiB/s 0 0 00:07:03.628 ==================================================================================== 00:07:03.628 Total 83840/s 327 MiB/s 0 0' 00:07:03.628 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:07:03.628 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:07:03.628 16:26:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.628 16:26:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:03.628 16:26:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.628 16:26:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.628 16:26:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.628 16:26:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.628 16:26:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.628 16:26:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.628 16:26:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.628 16:26:40 -- accel/accel.sh@42 -- # jq -r . 00:07:03.628 [2024-11-16 16:26:40.898102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
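Unlike the DIF workloads above (Verify: No), the decompress runs check each 4096-byte output buffer (Verify: Yes). The Total row still follows transfers x size:

awk 'BEGIN { printf "%d MiB/s\n", 83840 * 4096 / (1024 * 1024) }'
# prints 327, matching the Total row; the 154 MiB/s in the 0,0 row appears to
# use a different byte accounting (possibly compressed input bytes), an
# inference from this log rather than a documented behavior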
00:07:03.628 [2024-11-16 16:26:40.898385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71266 ] 00:07:03.628 [2024-11-16 16:26:41.031387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.628 [2024-11-16 16:26:41.086450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val= 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val= 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val= 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val=0x1 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val= 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val= 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val=decompress 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val= 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val=software 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val=32 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- 
accel/accel.sh@21 -- # val=32 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val=1 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val=Yes 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val= 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:03.888 16:26:41 -- accel/accel.sh@21 -- # val= 00:07:03.888 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:07:03.888 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:07:04.825 16:26:42 -- accel/accel.sh@21 -- # val= 00:07:04.825 16:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # IFS=: 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # read -r var val 00:07:04.825 16:26:42 -- accel/accel.sh@21 -- # val= 00:07:04.825 16:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # IFS=: 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # read -r var val 00:07:04.825 16:26:42 -- accel/accel.sh@21 -- # val= 00:07:04.825 16:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # IFS=: 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # read -r var val 00:07:04.825 16:26:42 -- accel/accel.sh@21 -- # val= 00:07:04.825 16:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # IFS=: 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # read -r var val 00:07:04.825 16:26:42 -- accel/accel.sh@21 -- # val= 00:07:04.825 16:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # IFS=: 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # read -r var val 00:07:04.825 ************************************ 00:07:04.825 END TEST accel_decomp 00:07:04.825 ************************************ 00:07:04.825 16:26:42 -- accel/accel.sh@21 -- # val= 00:07:04.825 16:26:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # IFS=: 00:07:04.825 16:26:42 -- accel/accel.sh@20 -- # read -r var val 00:07:04.825 16:26:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.825 16:26:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:04.825 16:26:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.825 00:07:04.825 real 0m2.807s 00:07:04.825 user 0m2.385s 00:07:04.825 sys 0m0.221s 00:07:04.825 16:26:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.825 16:26:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.085 16:26:42 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
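accel_decmop_full reruns the decompress workload with -o 0 appended. Judging from the configuration block that follows (Transfer size: 111250 bytes instead of 4096), -o 0 requests whole-buffer transfers; that reading is inferred from this log only. With the 5600 transfers/s reported in the summary further down, the expected total is:

awk 'BEGIN { printf "%d MiB/s\n", 5600 * 111250 / (1024 * 1024) }'
# prints 594, matching the Total row of the accel_decmop_full summary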
00:07:05.085 16:26:42 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:05.085 16:26:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.085 16:26:42 -- common/autotest_common.sh@10 -- # set +x 00:07:05.085 ************************************ 00:07:05.085 START TEST accel_decmop_full 00:07:05.085 ************************************ 00:07:05.085 16:26:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:05.085 16:26:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.085 16:26:42 -- accel/accel.sh@17 -- # local accel_module 00:07:05.085 16:26:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:05.085 16:26:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.085 16:26:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:05.085 16:26:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.085 16:26:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.085 16:26:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.085 16:26:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.085 16:26:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.085 16:26:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.085 16:26:42 -- accel/accel.sh@42 -- # jq -r . 00:07:05.085 [2024-11-16 16:26:42.351520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.085 [2024-11-16 16:26:42.351617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71295 ] 00:07:05.085 [2024-11-16 16:26:42.487414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.085 [2024-11-16 16:26:42.539670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.464 16:26:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:06.464 00:07:06.464 SPDK Configuration: 00:07:06.464 Core mask: 0x1 00:07:06.464 00:07:06.464 Accel Perf Configuration: 00:07:06.464 Workload Type: decompress 00:07:06.464 Transfer size: 111250 bytes 00:07:06.464 Vector count 1 00:07:06.464 Module: software 00:07:06.464 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.464 Queue depth: 32 00:07:06.464 Allocate depth: 32 00:07:06.464 # threads/core: 1 00:07:06.464 Run time: 1 seconds 00:07:06.464 Verify: Yes 00:07:06.464 00:07:06.464 Running for 1 seconds... 
00:07:06.464 00:07:06.464 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.464 ------------------------------------------------------------------------------------ 00:07:06.464 0,0 5600/s 231 MiB/s 0 0 00:07:06.464 ==================================================================================== 00:07:06.464 Total 5600/s 594 MiB/s 0 0' 00:07:06.464 16:26:43 -- accel/accel.sh@20 -- # IFS=: 00:07:06.464 16:26:43 -- accel/accel.sh@20 -- # read -r var val 00:07:06.464 16:26:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:06.464 16:26:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:06.464 16:26:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.464 16:26:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.464 16:26:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.464 16:26:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.464 16:26:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.464 16:26:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.464 16:26:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.464 16:26:43 -- accel/accel.sh@42 -- # jq -r . 00:07:06.464 [2024-11-16 16:26:43.779787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.464 [2024-11-16 16:26:43.779884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71320 ] 00:07:06.464 [2024-11-16 16:26:43.915689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.725 [2024-11-16 16:26:43.967449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val= 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val= 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val= 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val=0x1 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val= 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val= 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val=decompress 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:06.725 16:26:44 -- accel/accel.sh@20 
-- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val= 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val=software 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val=32 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val=32 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val=1 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val=Yes 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val= 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:06.725 16:26:44 -- accel/accel.sh@21 -- # val= 00:07:06.725 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:07:06.725 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:07:08.159 16:26:45 -- accel/accel.sh@21 -- # val= 00:07:08.159 16:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # IFS=: 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # read -r var val 00:07:08.159 16:26:45 -- accel/accel.sh@21 -- # val= 00:07:08.159 16:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # IFS=: 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # read -r var val 00:07:08.159 16:26:45 -- accel/accel.sh@21 -- # val= 00:07:08.159 16:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # IFS=: 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # read -r var val 00:07:08.159 16:26:45 -- accel/accel.sh@21 -- # 
val= 00:07:08.159 16:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # IFS=: 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # read -r var val 00:07:08.159 16:26:45 -- accel/accel.sh@21 -- # val= 00:07:08.159 16:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # IFS=: 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # read -r var val 00:07:08.159 16:26:45 -- accel/accel.sh@21 -- # val= 00:07:08.159 16:26:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # IFS=: 00:07:08.159 16:26:45 -- accel/accel.sh@20 -- # read -r var val 00:07:08.159 16:26:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.159 16:26:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:08.159 16:26:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.159 00:07:08.159 real 0m2.851s 00:07:08.159 user 0m2.432s 00:07:08.159 sys 0m0.216s 00:07:08.159 16:26:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.159 ************************************ 00:07:08.159 END TEST accel_decmop_full 00:07:08.159 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:07:08.159 ************************************ 00:07:08.159 16:26:45 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:08.159 16:26:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:08.159 16:26:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.159 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:07:08.159 ************************************ 00:07:08.159 START TEST accel_decomp_mcore 00:07:08.159 ************************************ 00:07:08.159 16:26:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:08.159 16:26:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.160 16:26:45 -- accel/accel.sh@17 -- # local accel_module 00:07:08.160 16:26:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:08.160 16:26:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.160 16:26:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:08.160 16:26:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.160 16:26:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.160 16:26:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.160 16:26:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.160 16:26:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.160 16:26:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.160 16:26:45 -- accel/accel.sh@42 -- # jq -r . 00:07:08.160 [2024-11-16 16:26:45.258857] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:08.160 [2024-11-16 16:26:45.258958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71349 ] 00:07:08.160 [2024-11-16 16:26:45.396446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.160 [2024-11-16 16:26:45.466343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.160 [2024-11-16 16:26:45.466482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.160 [2024-11-16 16:26:45.466604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.160 [2024-11-16 16:26:45.466874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.549 16:26:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:09.549 00:07:09.549 SPDK Configuration: 00:07:09.549 Core mask: 0xf 00:07:09.549 00:07:09.549 Accel Perf Configuration: 00:07:09.549 Workload Type: decompress 00:07:09.549 Transfer size: 4096 bytes 00:07:09.549 Vector count 1 00:07:09.549 Module: software 00:07:09.549 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.549 Queue depth: 32 00:07:09.549 Allocate depth: 32 00:07:09.549 # threads/core: 1 00:07:09.549 Run time: 1 seconds 00:07:09.549 Verify: Yes 00:07:09.549 00:07:09.549 Running for 1 seconds... 00:07:09.549 00:07:09.549 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.549 ------------------------------------------------------------------------------------ 00:07:09.549 0,0 59680/s 109 MiB/s 0 0 00:07:09.549 3,0 56256/s 103 MiB/s 0 0 00:07:09.549 2,0 57024/s 105 MiB/s 0 0 00:07:09.549 1,0 57728/s 106 MiB/s 0 0 00:07:09.549 ==================================================================================== 00:07:09.549 Total 230688/s 901 MiB/s 0 0' 00:07:09.549 16:26:46 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:46 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.549 16:26:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:09.549 16:26:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.549 16:26:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.549 16:26:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.549 16:26:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.549 16:26:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.549 16:26:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.549 16:26:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.549 16:26:46 -- accel/accel.sh@42 -- # jq -r . 00:07:09.549 [2024-11-16 16:26:46.744151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
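With -m 0xf the app starts a reactor on each of cores 0-3, and the Total row of the table above is the sum of the four per-core rows:

awk 'BEGIN { t = 59680 + 56256 + 57024 + 57728; printf "%d transfers/s -> %d MiB/s\n", t, t * 4096 / (1024 * 1024) }'
# prints 230688 transfers/s -> 901 MiB/s, as reported; each core runs somewhat
# below the 83840/s single-core decompress rate, which would point at some
# shared-resource contention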
00:07:09.549 [2024-11-16 16:26:46.744249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71372 ] 00:07:09.549 [2024-11-16 16:26:46.881747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.549 [2024-11-16 16:26:46.942925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.549 [2024-11-16 16:26:46.943101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.549 [2024-11-16 16:26:46.943587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.549 [2024-11-16 16:26:46.943896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val= 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val= 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val= 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val=0xf 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val= 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val= 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val=decompress 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val= 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val=software 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 
00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val=32 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val=32 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val=1 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val=Yes 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val= 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:09.549 16:26:47 -- accel/accel.sh@21 -- # val= 00:07:09.549 16:26:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # IFS=: 00:07:09.549 16:26:47 -- accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@21 -- # val= 00:07:10.926 16:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@21 -- # val= 00:07:10.926 16:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@21 -- # val= 00:07:10.926 16:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@21 -- # val= 00:07:10.926 16:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@21 -- # val= 00:07:10.926 16:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@21 -- # val= 00:07:10.926 16:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@21 -- # val= 00:07:10.926 16:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@21 -- # val= 00:07:10.926 16:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.926 16:26:48 -- 
accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@21 -- # val= 00:07:10.926 16:26:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:07:10.926 16:26:48 -- accel/accel.sh@20 -- # read -r var val 00:07:10.926 16:26:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.926 16:26:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:10.926 16:26:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.926 00:07:10.926 real 0m3.013s 00:07:10.926 user 0m4.755s 00:07:10.926 sys 0m0.128s 00:07:10.926 16:26:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.926 ************************************ 00:07:10.926 END TEST accel_decomp_mcore 00:07:10.926 ************************************ 00:07:10.926 16:26:48 -- common/autotest_common.sh@10 -- # set +x 00:07:10.926 16:26:48 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.926 16:26:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:10.926 16:26:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.926 16:26:48 -- common/autotest_common.sh@10 -- # set +x 00:07:10.926 ************************************ 00:07:10.926 START TEST accel_decomp_full_mcore 00:07:10.926 ************************************ 00:07:10.926 16:26:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.926 16:26:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.926 16:26:48 -- accel/accel.sh@17 -- # local accel_module 00:07:10.926 16:26:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.926 16:26:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:10.926 16:26:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.926 16:26:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.926 16:26:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.926 16:26:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.926 16:26:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.926 16:26:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.926 16:26:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.926 16:26:48 -- accel/accel.sh@42 -- # jq -r . 00:07:10.926 [2024-11-16 16:26:48.325827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:10.926 [2024-11-16 16:26:48.325930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71409 ] 00:07:11.184 [2024-11-16 16:26:48.465264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.184 [2024-11-16 16:26:48.538211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.184 [2024-11-16 16:26:48.538376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.184 [2024-11-16 16:26:48.538513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.185 [2024-11-16 16:26:48.538758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.560 16:26:49 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:12.561 00:07:12.561 SPDK Configuration: 00:07:12.561 Core mask: 0xf 00:07:12.561 00:07:12.561 Accel Perf Configuration: 00:07:12.561 Workload Type: decompress 00:07:12.561 Transfer size: 111250 bytes 00:07:12.561 Vector count 1 00:07:12.561 Module: software 00:07:12.561 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.561 Queue depth: 32 00:07:12.561 Allocate depth: 32 00:07:12.561 # threads/core: 1 00:07:12.561 Run time: 1 seconds 00:07:12.561 Verify: Yes 00:07:12.561 00:07:12.561 Running for 1 seconds... 00:07:12.561 00:07:12.561 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.561 ------------------------------------------------------------------------------------ 00:07:12.561 0,0 5536/s 228 MiB/s 0 0 00:07:12.561 3,0 5344/s 220 MiB/s 0 0 00:07:12.561 2,0 5248/s 216 MiB/s 0 0 00:07:12.561 1,0 5376/s 222 MiB/s 0 0 00:07:12.561 ==================================================================================== 00:07:12.561 Total 21504/s 2281 MiB/s 0 0' 00:07:12.561 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:07:12.561 16:26:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.561 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:07:12.561 16:26:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.561 16:26:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.561 16:26:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.561 16:26:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.561 16:26:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.561 16:26:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.561 16:26:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.561 16:26:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.561 16:26:49 -- accel/accel.sh@42 -- # jq -r . 00:07:12.561 [2024-11-16 16:26:49.870971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
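Worth noting in the configuration echoed above: this 'full' variant adds -o 0 to the accel_perf command line, and the reported 'Transfer size: 111250 bytes' shows whole chunks of the bib input being decompressed instead of the default 4096-byte blocks used by the plain mcore test. A hedged standalone equivalent, under the same repo-layout assumption as the earlier sketch:

    # Full-buffer multi-core variant; -o 0 lets the transfer size follow the input file
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf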
00:07:12.561 [2024-11-16 16:26:49.871243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71432 ] 00:07:12.561 [2024-11-16 16:26:50.010493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.819 [2024-11-16 16:26:50.103918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.819 [2024-11-16 16:26:50.104226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.819 [2024-11-16 16:26:50.104231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.819 [2024-11-16 16:26:50.104100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val= 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val= 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val= 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val=0xf 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val= 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val= 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val=decompress 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val= 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val=software 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 
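The accel/accel.sh@20-22 entries running through this section are the harness stepping through accel_perf's configuration echo one field at a time. The following is a minimal sketch of that pattern reconstructed from the trace rather than copied from accel.sh; the parse_perf_config name and the case patterns are illustrative assumptions, while the IFS=: / read -r var val / case "$var" idiom and the accel_opc and accel_module variables are visible in the log itself:

    # Split each "Key: Value" line of accel_perf output on ':' and latch the fields
    # the test later asserts on ([[ -n software ]], [[ -n decompress ]]).
    parse_perf_config() {    # hypothetical helper name
        local var val accel_opc='' accel_module=''
        while IFS=: read -r var val; do
            case "$var" in
                *'Workload Type'*) accel_opc=${val# } ;;
                *'Module'*)        accel_module=${val# } ;;
            esac
        done
        echo "opc=$accel_opc module=$accel_module"
    }
    # usage: "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y | parse_perf_config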
00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val=32 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.819 16:26:50 -- accel/accel.sh@21 -- # val=32 00:07:12.819 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.819 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.820 16:26:50 -- accel/accel.sh@21 -- # val=1 00:07:12.820 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.820 16:26:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.820 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.820 16:26:50 -- accel/accel.sh@21 -- # val=Yes 00:07:12.820 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.820 16:26:50 -- accel/accel.sh@21 -- # val= 00:07:12.820 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:12.820 16:26:50 -- accel/accel.sh@21 -- # val= 00:07:12.820 16:26:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # IFS=: 00:07:12.820 16:26:50 -- accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@21 -- # val= 00:07:14.195 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@21 -- # val= 00:07:14.195 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@21 -- # val= 00:07:14.195 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@21 -- # val= 00:07:14.195 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@21 -- # val= 00:07:14.195 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@21 -- # val= 00:07:14.195 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@21 -- # val= 00:07:14.195 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@21 -- # val= 00:07:14.195 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.195 16:26:51 -- 
accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@21 -- # val= 00:07:14.195 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:07:14.195 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:07:14.195 16:26:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.195 16:26:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.195 16:26:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.195 00:07:14.195 real 0m3.088s 00:07:14.195 user 0m9.700s 00:07:14.195 sys 0m0.322s 00:07:14.195 16:26:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.195 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.195 ************************************ 00:07:14.195 END TEST accel_decomp_full_mcore 00:07:14.195 ************************************ 00:07:14.195 16:26:51 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.195 16:26:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:14.195 16:26:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.195 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.195 ************************************ 00:07:14.195 START TEST accel_decomp_mthread 00:07:14.195 ************************************ 00:07:14.195 16:26:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.195 16:26:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.195 16:26:51 -- accel/accel.sh@17 -- # local accel_module 00:07:14.195 16:26:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.195 16:26:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:14.195 16:26:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.195 16:26:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.195 16:26:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.195 16:26:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.195 16:26:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.195 16:26:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.195 16:26:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.195 16:26:51 -- accel/accel.sh@42 -- # jq -r . 00:07:14.195 [2024-11-16 16:26:51.466198] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.195 [2024-11-16 16:26:51.466283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71469 ] 00:07:14.195 [2024-11-16 16:26:51.598049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.195 [2024-11-16 16:26:51.674243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.571 16:26:52 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:15.571 00:07:15.571 SPDK Configuration: 00:07:15.571 Core mask: 0x1 00:07:15.571 00:07:15.571 Accel Perf Configuration: 00:07:15.571 Workload Type: decompress 00:07:15.571 Transfer size: 4096 bytes 00:07:15.571 Vector count 1 00:07:15.571 Module: software 00:07:15.571 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.571 Queue depth: 32 00:07:15.571 Allocate depth: 32 00:07:15.571 # threads/core: 2 00:07:15.571 Run time: 1 seconds 00:07:15.571 Verify: Yes 00:07:15.571 00:07:15.571 Running for 1 seconds... 00:07:15.571 00:07:15.571 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.571 ------------------------------------------------------------------------------------ 00:07:15.571 0,1 43200/s 79 MiB/s 0 0 00:07:15.571 0,0 43072/s 79 MiB/s 0 0 00:07:15.571 ==================================================================================== 00:07:15.571 Total 86272/s 337 MiB/s 0 0' 00:07:15.571 16:26:52 -- accel/accel.sh@20 -- # IFS=: 00:07:15.571 16:26:52 -- accel/accel.sh@20 -- # read -r var val 00:07:15.571 16:26:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.571 16:26:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:15.571 16:26:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.571 16:26:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.571 16:26:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.571 16:26:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.571 16:26:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.571 16:26:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.571 16:26:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.571 16:26:52 -- accel/accel.sh@42 -- # jq -r . 00:07:15.571 [2024-11-16 16:26:52.987820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
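One line in the configuration above is easy to misread: '# threads/core: 2' comes from the -T 2 flag on the traced command, so the result rows 0,0 and 0,1 are two worker threads sharing core 0, not two separate cores. A hedged single-core equivalent under the same layout assumption:

    # Two worker threads on one core (-T 2); results report as rows 0,0 and 0,1
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2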
00:07:15.571 [2024-11-16 16:26:52.987913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71489 ] 00:07:15.829 [2024-11-16 16:26:53.115391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.829 [2024-11-16 16:26:53.182180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.829 16:26:53 -- accel/accel.sh@21 -- # val= 00:07:15.829 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.829 16:26:53 -- accel/accel.sh@21 -- # val= 00:07:15.829 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.829 16:26:53 -- accel/accel.sh@21 -- # val= 00:07:15.829 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.829 16:26:53 -- accel/accel.sh@21 -- # val=0x1 00:07:15.829 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.829 16:26:53 -- accel/accel.sh@21 -- # val= 00:07:15.829 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.829 16:26:53 -- accel/accel.sh@21 -- # val= 00:07:15.829 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.829 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.829 16:26:53 -- accel/accel.sh@21 -- # val=decompress 00:07:15.829 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val= 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val=software 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val=32 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- 
accel/accel.sh@21 -- # val=32 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val=2 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val=Yes 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val= 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:15.830 16:26:53 -- accel/accel.sh@21 -- # val= 00:07:15.830 16:26:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:07:15.830 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:07:17.205 16:26:54 -- accel/accel.sh@21 -- # val= 00:07:17.205 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.205 16:26:54 -- accel/accel.sh@21 -- # val= 00:07:17.205 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.205 16:26:54 -- accel/accel.sh@21 -- # val= 00:07:17.205 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.205 16:26:54 -- accel/accel.sh@21 -- # val= 00:07:17.205 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.205 16:26:54 -- accel/accel.sh@21 -- # val= 00:07:17.205 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.205 16:26:54 -- accel/accel.sh@21 -- # val= 00:07:17.205 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.205 16:26:54 -- accel/accel.sh@21 -- # val= 00:07:17.205 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:07:17.205 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:07:17.205 16:26:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.205 16:26:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:17.205 16:26:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.205 00:07:17.205 real 0m3.029s 00:07:17.205 user 0m2.543s 00:07:17.205 sys 0m0.279s 00:07:17.205 16:26:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.205 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.205 ************************************ 00:07:17.205 END 
TEST accel_decomp_mthread 00:07:17.205 ************************************ 00:07:17.205 16:26:54 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.205 16:26:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:17.205 16:26:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.205 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.205 ************************************ 00:07:17.205 START TEST accel_decomp_full_mthread 00:07:17.205 ************************************ 00:07:17.205 16:26:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.205 16:26:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.205 16:26:54 -- accel/accel.sh@17 -- # local accel_module 00:07:17.205 16:26:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.205 16:26:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.205 16:26:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.205 16:26:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.205 16:26:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.205 16:26:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.205 16:26:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.205 16:26:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.205 16:26:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.205 16:26:54 -- accel/accel.sh@42 -- # jq -r . 00:07:17.205 [2024-11-16 16:26:54.553164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.205 [2024-11-16 16:26:54.553255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71523 ] 00:07:17.205 [2024-11-16 16:26:54.689745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.205 [2024-11-16 16:26:54.762283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.839 16:26:56 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:18.839 00:07:18.839 SPDK Configuration: 00:07:18.839 Core mask: 0x1 00:07:18.839 00:07:18.839 Accel Perf Configuration: 00:07:18.839 Workload Type: decompress 00:07:18.839 Transfer size: 111250 bytes 00:07:18.839 Vector count 1 00:07:18.839 Module: software 00:07:18.839 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.839 Queue depth: 32 00:07:18.839 Allocate depth: 32 00:07:18.839 # threads/core: 2 00:07:18.839 Run time: 1 seconds 00:07:18.839 Verify: Yes 00:07:18.839 00:07:18.839 Running for 1 seconds...
00:07:18.839 00:07:18.839 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.839 ------------------------------------------------------------------------------------ 00:07:18.839 0,1 2880/s 118 MiB/s 0 0 00:07:18.839 0,0 2848/s 117 MiB/s 0 0 00:07:18.839 ==================================================================================== 00:07:18.839 Total 5728/s 607 MiB/s 0 0' 00:07:18.839 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:18.839 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:18.839 16:26:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.839 16:26:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.839 16:26:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.839 16:26:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.839 16:26:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.839 16:26:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.839 16:26:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.839 16:26:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.839 16:26:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.839 16:26:56 -- accel/accel.sh@42 -- # jq -r . 00:07:18.839 [2024-11-16 16:26:56.095981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.839 [2024-11-16 16:26:56.096085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71543 ] 00:07:18.839 [2024-11-16 16:26:56.230647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.839 [2024-11-16 16:26:56.301915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val= 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val= 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val= 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val=0x1 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val= 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val= 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val=decompress 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val= 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val=software 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val=32 00:07:19.097 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.097 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.097 16:26:56 -- accel/accel.sh@21 -- # val=32 00:07:19.098 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 16:26:56 -- accel/accel.sh@21 -- # val=2 00:07:19.098 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 16:26:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.098 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 16:26:56 -- accel/accel.sh@21 -- # val=Yes 00:07:19.098 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 16:26:56 -- accel/accel.sh@21 -- # val= 00:07:19.098 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:19.098 16:26:56 -- accel/accel.sh@21 -- # val= 00:07:19.098 16:26:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # IFS=: 00:07:19.098 16:26:56 -- accel/accel.sh@20 -- # read -r var val 00:07:20.473 16:26:57 -- accel/accel.sh@21 -- # val= 00:07:20.473 16:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:20.473 16:26:57 -- accel/accel.sh@21 -- # val= 00:07:20.473 16:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:20.473 16:26:57 -- accel/accel.sh@21 -- # val= 00:07:20.473 16:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # 
read -r var val 00:07:20.473 16:26:57 -- accel/accel.sh@21 -- # val= 00:07:20.473 16:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:20.473 16:26:57 -- accel/accel.sh@21 -- # val= 00:07:20.473 16:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:20.473 16:26:57 -- accel/accel.sh@21 -- # val= 00:07:20.473 16:26:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # IFS=: 00:07:20.473 16:26:57 -- accel/accel.sh@20 -- # read -r var val 00:07:20.473 16:26:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.473 16:26:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.473 16:26:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.473 00:07:20.473 real 0m3.089s 00:07:20.473 user 0m2.597s 00:07:20.473 sys 0m0.283s 00:07:20.473 16:26:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.473 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:07:20.473 ************************************ 00:07:20.473 END TEST accel_decomp_full_mthread 00:07:20.473 ************************************ 00:07:20.473 16:26:57 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:20.473 16:26:57 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:20.473 16:26:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:20.473 16:26:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.473 16:26:57 -- accel/accel.sh@129 -- # build_accel_config 00:07:20.473 16:26:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.473 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:07:20.473 16:26:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.473 16:26:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.473 16:26:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.473 16:26:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.473 16:26:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.473 16:26:57 -- accel/accel.sh@42 -- # jq -r . 00:07:20.473 ************************************ 00:07:20.473 START TEST accel_dif_functional_tests 00:07:20.473 ************************************ 00:07:20.473 16:26:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:20.473 [2024-11-16 16:26:57.726059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:20.473 [2024-11-16 16:26:57.726164] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71584 ] 00:07:20.473 [2024-11-16 16:26:57.856437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.473 [2024-11-16 16:26:57.932080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.473 [2024-11-16 16:26:57.932232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.473 [2024-11-16 16:26:57.932249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.732 00:07:20.732 00:07:20.732 CUnit - A unit testing framework for C - Version 2.1-3 00:07:20.732 http://cunit.sourceforge.net/ 00:07:20.732 00:07:20.732 00:07:20.732 Suite: accel_dif 00:07:20.732 Test: verify: DIF generated, GUARD check ...passed 00:07:20.732 Test: verify: DIF generated, APPTAG check ...passed 00:07:20.732 Test: verify: DIF generated, REFTAG check ...passed 00:07:20.732 Test: verify: DIF not generated, GUARD check ...passed 00:07:20.733 Test: verify: DIF not generated, APPTAG check ...[2024-11-16 16:26:58.048903] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:20.733 [2024-11-16 16:26:58.048973] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:20.733 [2024-11-16 16:26:58.049037] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:20.733 passed 00:07:20.733 Test: verify: DIF not generated, REFTAG check ...[2024-11-16 16:26:58.049087] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:20.733 [2024-11-16 16:26:58.049121] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:20.733 passed 00:07:20.733 Test: verify: APPTAG correct, APPTAG check ...[2024-11-16 16:26:58.049474] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:20.733 passed 00:07:20.733 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-16 16:26:58.049556] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:20.733 passed 00:07:20.733 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:20.733 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:20.733 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:20.733 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:20.733 Test: generate copy: DIF generated, GUARD check ...[2024-11-16 16:26:58.049800] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:20.733 passed 00:07:20.733 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:20.733 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:20.733 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:20.733 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:20.733 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:20.733 Test: generate copy: iovecs-len validate ...[2024-11-16 16:26:58.050382] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:20.733 passed 00:07:20.733 Test: generate copy: buffer alignment validate ...passed 00:07:20.733 00:07:20.733 Run Summary: Type Total Ran Passed Failed Inactive 00:07:20.733 suites 1 1 n/a 0 0 00:07:20.733 tests 20 20 20 0 0 00:07:20.733 asserts 204 204 204 0 n/a 00:07:20.733 00:07:20.733 Elapsed time = 0.004 seconds 00:07:20.992 00:07:20.992 real 0m0.639s 00:07:20.992 user 0m0.927s 00:07:20.992 sys 0m0.186s 00:07:20.992 ************************************ 00:07:20.992 END TEST accel_dif_functional_tests 00:07:20.992 ************************************ 00:07:20.992 16:26:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.992 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.992 00:07:20.992 real 1m3.164s 00:07:20.992 user 1m7.289s 00:07:20.992 sys 0m6.848s 00:07:20.992 16:26:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.992 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.992 ************************************ 00:07:20.992 END TEST accel 00:07:20.992 ************************************ 00:07:20.992 16:26:58 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:20.992 16:26:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:20.992 16:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.992 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:20.992 ************************************ 00:07:20.992 START TEST accel_rpc 00:07:20.993 ************************************ 00:07:20.993 16:26:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:21.252 * Looking for test storage... 00:07:21.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:21.252 16:26:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:21.252 16:26:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:21.252 16:26:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:21.252 16:26:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:21.252 16:26:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:21.252 16:26:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:21.252 16:26:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:21.252 16:26:58 -- scripts/common.sh@335 -- # IFS=.-: 00:07:21.252 16:26:58 -- scripts/common.sh@335 -- # read -ra ver1 00:07:21.252 16:26:58 -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.252 16:26:58 -- scripts/common.sh@336 -- # read -ra ver2 00:07:21.252 16:26:58 -- scripts/common.sh@337 -- # local 'op=<' 00:07:21.252 16:26:58 -- scripts/common.sh@339 -- # ver1_l=2 00:07:21.252 16:26:58 -- scripts/common.sh@340 -- # ver2_l=1 00:07:21.252 16:26:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:21.252 16:26:58 -- scripts/common.sh@343 -- # case "$op" in 00:07:21.252 16:26:58 -- scripts/common.sh@344 -- # : 1 00:07:21.252 16:26:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:21.252 16:26:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:21.252 16:26:58 -- scripts/common.sh@364 -- # decimal 1 00:07:21.252 16:26:58 -- scripts/common.sh@352 -- # local d=1 00:07:21.252 16:26:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.252 16:26:58 -- scripts/common.sh@354 -- # echo 1 00:07:21.252 16:26:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:21.252 16:26:58 -- scripts/common.sh@365 -- # decimal 2 00:07:21.252 16:26:58 -- scripts/common.sh@352 -- # local d=2 00:07:21.252 16:26:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.252 16:26:58 -- scripts/common.sh@354 -- # echo 2 00:07:21.252 16:26:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:21.252 16:26:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:21.252 16:26:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:21.252 16:26:58 -- scripts/common.sh@367 -- # return 0 00:07:21.252 16:26:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.252 16:26:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:21.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.252 --rc genhtml_branch_coverage=1 00:07:21.252 --rc genhtml_function_coverage=1 00:07:21.252 --rc genhtml_legend=1 00:07:21.252 --rc geninfo_all_blocks=1 00:07:21.252 --rc geninfo_unexecuted_blocks=1 00:07:21.252 00:07:21.252 ' 00:07:21.252 16:26:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:21.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.252 --rc genhtml_branch_coverage=1 00:07:21.252 --rc genhtml_function_coverage=1 00:07:21.252 --rc genhtml_legend=1 00:07:21.252 --rc geninfo_all_blocks=1 00:07:21.252 --rc geninfo_unexecuted_blocks=1 00:07:21.252 00:07:21.252 ' 00:07:21.252 16:26:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:21.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.252 --rc genhtml_branch_coverage=1 00:07:21.252 --rc genhtml_function_coverage=1 00:07:21.252 --rc genhtml_legend=1 00:07:21.252 --rc geninfo_all_blocks=1 00:07:21.252 --rc geninfo_unexecuted_blocks=1 00:07:21.252 00:07:21.252 ' 00:07:21.252 16:26:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:21.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.252 --rc genhtml_branch_coverage=1 00:07:21.252 --rc genhtml_function_coverage=1 00:07:21.252 --rc genhtml_legend=1 00:07:21.252 --rc geninfo_all_blocks=1 00:07:21.252 --rc geninfo_unexecuted_blocks=1 00:07:21.252 00:07:21.252 ' 00:07:21.252 16:26:58 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:21.252 16:26:58 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71655 00:07:21.252 16:26:58 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:21.252 16:26:58 -- accel/accel_rpc.sh@15 -- # waitforlisten 71655 00:07:21.252 16:26:58 -- common/autotest_common.sh@829 -- # '[' -z 71655 ']' 00:07:21.252 16:26:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.252 16:26:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.252 16:26:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
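At this point the harness has launched spdk_tgt with --wait-for-rpc and is polling for its RPC socket; the calls traced below (accel_assign_opc with an invalid module and then the software module, framework_start_init, accel_get_opc_assignments) can be replayed by hand. A rough sketch under the same repo-layout assumption; the sleep is a crude stand-in for the waitforlisten helper, and rpc.py targets /var/tmp/spdk.sock by default:

    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    tgt=$!
    sleep 1                                                      # stand-in for waitforlisten
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software  # must precede framework init
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # expect: software
    kill "$tgt"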
00:07:21.252 16:26:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.252 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.252 [2024-11-16 16:26:58.643256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.252 [2024-11-16 16:26:58.643380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71655 ] 00:07:21.512 [2024-11-16 16:26:58.781904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.512 [2024-11-16 16:26:58.847540] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:21.512 [2024-11-16 16:26:58.847724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.512 16:26:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.512 16:26:58 -- common/autotest_common.sh@862 -- # return 0 00:07:21.512 16:26:58 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:21.512 16:26:58 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:21.512 16:26:58 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:21.512 16:26:58 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:21.512 16:26:58 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:21.512 16:26:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:21.512 16:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.512 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.512 ************************************ 00:07:21.512 START TEST accel_assign_opcode 00:07:21.512 ************************************ 00:07:21.512 16:26:58 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:21.512 16:26:58 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:21.512 16:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.512 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.512 [2024-11-16 16:26:58.904300] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:21.512 16:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.512 16:26:58 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:21.512 16:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.512 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.512 [2024-11-16 16:26:58.912299] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:21.512 16:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.512 16:26:58 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:21.512 16:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.512 16:26:58 -- common/autotest_common.sh@10 -- # set +x 00:07:21.771 16:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.771 16:26:59 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:21.771 16:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.771 16:26:59 -- common/autotest_common.sh@10 -- # set +x 00:07:21.771 16:26:59 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:21.771 16:26:59 -- accel/accel_rpc.sh@42 -- # grep software 00:07:21.771 16:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.771 software 00:07:21.771 00:07:21.771 
real 0m0.357s 00:07:21.771 user 0m0.056s 00:07:21.771 sys 0m0.012s 00:07:21.771 16:26:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.771 16:26:59 -- common/autotest_common.sh@10 -- # set +x 00:07:21.771 ************************************ 00:07:21.771 END TEST accel_assign_opcode 00:07:21.771 ************************************ 00:07:22.030 16:26:59 -- accel/accel_rpc.sh@55 -- # killprocess 71655 00:07:22.030 16:26:59 -- common/autotest_common.sh@936 -- # '[' -z 71655 ']' 00:07:22.030 16:26:59 -- common/autotest_common.sh@940 -- # kill -0 71655 00:07:22.030 16:26:59 -- common/autotest_common.sh@941 -- # uname 00:07:22.030 16:26:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.030 16:26:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71655 00:07:22.030 16:26:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:22.030 16:26:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:22.030 killing process with pid 71655 00:07:22.030 16:26:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71655' 00:07:22.030 16:26:59 -- common/autotest_common.sh@955 -- # kill 71655 00:07:22.030 16:26:59 -- common/autotest_common.sh@960 -- # wait 71655 00:07:22.598 00:07:22.598 real 0m1.448s 00:07:22.598 user 0m1.265s 00:07:22.598 sys 0m0.523s 00:07:22.599 16:26:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.599 16:26:59 -- common/autotest_common.sh@10 -- # set +x 00:07:22.599 ************************************ 00:07:22.599 END TEST accel_rpc 00:07:22.599 ************************************ 00:07:22.599 16:26:59 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.599 16:26:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.599 16:26:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.599 16:26:59 -- common/autotest_common.sh@10 -- # set +x 00:07:22.599 ************************************ 00:07:22.599 START TEST app_cmdline 00:07:22.599 ************************************ 00:07:22.599 16:26:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.599 * Looking for test storage... 
00:07:22.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:22.599 16:26:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:22.599 16:26:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:22.599 16:26:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.857 16:27:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.857 16:27:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.857 16:27:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.857 16:27:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.857 16:27:00 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.857 16:27:00 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.857 16:27:00 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.857 16:27:00 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.857 16:27:00 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.857 16:27:00 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.857 16:27:00 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.857 16:27:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.857 16:27:00 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.857 16:27:00 -- scripts/common.sh@344 -- # : 1 00:07:22.857 16:27:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.857 16:27:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.857 16:27:00 -- scripts/common.sh@364 -- # decimal 1 00:07:22.857 16:27:00 -- scripts/common.sh@352 -- # local d=1 00:07:22.857 16:27:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.857 16:27:00 -- scripts/common.sh@354 -- # echo 1 00:07:22.857 16:27:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.857 16:27:00 -- scripts/common.sh@365 -- # decimal 2 00:07:22.857 16:27:00 -- scripts/common.sh@352 -- # local d=2 00:07:22.857 16:27:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.857 16:27:00 -- scripts/common.sh@354 -- # echo 2 00:07:22.857 16:27:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.857 16:27:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.857 16:27:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.857 16:27:00 -- scripts/common.sh@367 -- # return 0 00:07:22.857 16:27:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.857 16:27:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.857 --rc genhtml_branch_coverage=1 00:07:22.857 --rc genhtml_function_coverage=1 00:07:22.857 --rc genhtml_legend=1 00:07:22.857 --rc geninfo_all_blocks=1 00:07:22.857 --rc geninfo_unexecuted_blocks=1 00:07:22.857 00:07:22.857 ' 00:07:22.857 16:27:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.857 --rc genhtml_branch_coverage=1 00:07:22.857 --rc genhtml_function_coverage=1 00:07:22.857 --rc genhtml_legend=1 00:07:22.857 --rc geninfo_all_blocks=1 00:07:22.857 --rc geninfo_unexecuted_blocks=1 00:07:22.857 00:07:22.857 ' 00:07:22.857 16:27:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.857 --rc genhtml_branch_coverage=1 00:07:22.857 --rc genhtml_function_coverage=1 00:07:22.857 --rc genhtml_legend=1 00:07:22.857 --rc geninfo_all_blocks=1 00:07:22.857 --rc geninfo_unexecuted_blocks=1 00:07:22.857 00:07:22.857 ' 00:07:22.857 16:27:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.857 --rc genhtml_branch_coverage=1 00:07:22.857 --rc genhtml_function_coverage=1 00:07:22.857 --rc genhtml_legend=1 00:07:22.857 --rc geninfo_all_blocks=1 00:07:22.857 --rc geninfo_unexecuted_blocks=1 00:07:22.857 00:07:22.857 ' 00:07:22.857 16:27:00 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.857 16:27:00 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71760 00:07:22.857 16:27:00 -- app/cmdline.sh@18 -- # waitforlisten 71760 00:07:22.857 16:27:00 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.857 16:27:00 -- common/autotest_common.sh@829 -- # '[' -z 71760 ']' 00:07:22.857 16:27:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.857 16:27:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.857 16:27:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.857 16:27:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.857 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:07:22.857 [2024-11-16 16:27:00.165949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.857 [2024-11-16 16:27:00.166073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71760 ] 00:07:22.857 [2024-11-16 16:27:00.305066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.114 [2024-11-16 16:27:00.380478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.114 [2024-11-16 16:27:00.380669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.048 16:27:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.048 16:27:01 -- common/autotest_common.sh@862 -- # return 0 00:07:24.048 16:27:01 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:24.048 { 00:07:24.048 "fields": { 00:07:24.048 "commit": "c13c99a5e", 00:07:24.048 "major": 24, 00:07:24.048 "minor": 1, 00:07:24.048 "patch": 1, 00:07:24.048 "suffix": "-pre" 00:07:24.048 }, 00:07:24.048 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:24.048 } 00:07:24.048 16:27:01 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.048 16:27:01 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.048 16:27:01 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.049 16:27:01 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.049 16:27:01 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.049 16:27:01 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.049 16:27:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.049 16:27:01 -- common/autotest_common.sh@10 -- # set +x 00:07:24.049 16:27:01 -- app/cmdline.sh@26 -- # sort 00:07:24.049 16:27:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.049 16:27:01 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.049 16:27:01 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.049 16:27:01 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.049 16:27:01 -- common/autotest_common.sh@650 -- # local es=0 00:07:24.049 16:27:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.049 16:27:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.049 16:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.049 16:27:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.049 16:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.049 16:27:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.049 16:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.049 16:27:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.049 16:27:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:24.049 16:27:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.308 2024/11/16 16:27:01 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:24.308 request: 00:07:24.308 { 00:07:24.308 "method": "env_dpdk_get_mem_stats", 00:07:24.308 "params": {} 00:07:24.308 } 00:07:24.308 Got JSON-RPC error response 00:07:24.308 GoRPCClient: error on JSON-RPC call 00:07:24.308 16:27:01 -- common/autotest_common.sh@653 -- # es=1 00:07:24.308 16:27:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.308 16:27:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.308 16:27:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.308 16:27:01 -- app/cmdline.sh@1 -- # killprocess 71760 00:07:24.308 16:27:01 -- common/autotest_common.sh@936 -- # '[' -z 71760 ']' 00:07:24.308 16:27:01 -- common/autotest_common.sh@940 -- # kill -0 71760 00:07:24.308 16:27:01 -- common/autotest_common.sh@941 -- # uname 00:07:24.308 16:27:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:24.308 16:27:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71760 00:07:24.308 16:27:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:24.308 16:27:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:24.308 killing process with pid 71760 00:07:24.308 16:27:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71760' 00:07:24.308 16:27:01 -- common/autotest_common.sh@955 -- # kill 71760 00:07:24.308 16:27:01 -- common/autotest_common.sh@960 -- # wait 71760 00:07:24.877 00:07:24.877 real 0m2.363s 00:07:24.877 user 0m2.789s 00:07:24.877 sys 0m0.612s 00:07:24.877 16:27:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.877 16:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:24.877 ************************************ 00:07:24.877 END TEST app_cmdline 00:07:24.877 ************************************ 00:07:24.877 16:27:02 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:24.877 16:27:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:24.877 16:27:02 -- 
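
[Editor's note] The cmdline test above exercises the RPC allowlist: with --rpcs-allowed, the target serves only the listed methods, and anything else fails with JSON-RPC error -32601 exactly as logged. In sketch form (rpc.py again standing in for rpc_cmd):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
    --rpcs-allowed spdk_get_version,rpc_get_methods &
rpc.py spdk_get_version                        # allowed: returns the version object seen above
rpc.py rpc_get_methods | jq -r '.[]' | sort    # allowed: lists exactly the two methods
rpc.py env_dpdk_get_mem_stats                  # rejected: Code=-32601 Msg=Method not found
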
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.877 16:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:24.877 ************************************ 00:07:24.877 START TEST version 00:07:24.877 ************************************ 00:07:24.877 16:27:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.136 * Looking for test storage... 00:07:25.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:25.136 16:27:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.136 16:27:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.136 16:27:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.136 16:27:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.136 16:27:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.136 16:27:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.136 16:27:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.136 16:27:02 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.136 16:27:02 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.136 16:27:02 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.136 16:27:02 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.136 16:27:02 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.136 16:27:02 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.136 16:27:02 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.136 16:27:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.136 16:27:02 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.136 16:27:02 -- scripts/common.sh@344 -- # : 1 00:07:25.136 16:27:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.136 16:27:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.136 16:27:02 -- scripts/common.sh@364 -- # decimal 1 00:07:25.136 16:27:02 -- scripts/common.sh@352 -- # local d=1 00:07:25.136 16:27:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.136 16:27:02 -- scripts/common.sh@354 -- # echo 1 00:07:25.136 16:27:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.136 16:27:02 -- scripts/common.sh@365 -- # decimal 2 00:07:25.136 16:27:02 -- scripts/common.sh@352 -- # local d=2 00:07:25.136 16:27:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.136 16:27:02 -- scripts/common.sh@354 -- # echo 2 00:07:25.136 16:27:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.136 16:27:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.136 16:27:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.136 16:27:02 -- scripts/common.sh@367 -- # return 0 00:07:25.136 16:27:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.136 16:27:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.136 --rc genhtml_branch_coverage=1 00:07:25.136 --rc genhtml_function_coverage=1 00:07:25.136 --rc genhtml_legend=1 00:07:25.136 --rc geninfo_all_blocks=1 00:07:25.136 --rc geninfo_unexecuted_blocks=1 00:07:25.136 00:07:25.136 ' 00:07:25.136 16:27:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.136 --rc genhtml_branch_coverage=1 00:07:25.136 --rc genhtml_function_coverage=1 00:07:25.136 --rc genhtml_legend=1 00:07:25.136 --rc geninfo_all_blocks=1 00:07:25.136 --rc geninfo_unexecuted_blocks=1 00:07:25.137 00:07:25.137 ' 00:07:25.137 
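
[Editor's note] Every test script repeats this lcov probe and re-exports the same LCOV_OPTS/LCOV strings; they carry --rc switches for the coverage tooling. A sketch of how such options are typically consumed at report time (an assumption for illustration; neither command appears in this log):

lcov $LCOV_OPTS --capture --directory . --output-file coverage.info
genhtml coverage.info --output-directory coverage_html \
    --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
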
16:27:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.137 --rc genhtml_branch_coverage=1 00:07:25.137 --rc genhtml_function_coverage=1 00:07:25.137 --rc genhtml_legend=1 00:07:25.137 --rc geninfo_all_blocks=1 00:07:25.137 --rc geninfo_unexecuted_blocks=1 00:07:25.137 00:07:25.137 ' 00:07:25.137 16:27:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.137 --rc genhtml_branch_coverage=1 00:07:25.137 --rc genhtml_function_coverage=1 00:07:25.137 --rc genhtml_legend=1 00:07:25.137 --rc geninfo_all_blocks=1 00:07:25.137 --rc geninfo_unexecuted_blocks=1 00:07:25.137 00:07:25.137 ' 00:07:25.137 16:27:02 -- app/version.sh@17 -- # get_header_version major 00:07:25.137 16:27:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.137 16:27:02 -- app/version.sh@14 -- # cut -f2 00:07:25.137 16:27:02 -- app/version.sh@14 -- # tr -d '"' 00:07:25.137 16:27:02 -- app/version.sh@17 -- # major=24 00:07:25.137 16:27:02 -- app/version.sh@18 -- # get_header_version minor 00:07:25.137 16:27:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.137 16:27:02 -- app/version.sh@14 -- # cut -f2 00:07:25.137 16:27:02 -- app/version.sh@14 -- # tr -d '"' 00:07:25.137 16:27:02 -- app/version.sh@18 -- # minor=1 00:07:25.137 16:27:02 -- app/version.sh@19 -- # get_header_version patch 00:07:25.137 16:27:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.137 16:27:02 -- app/version.sh@14 -- # tr -d '"' 00:07:25.137 16:27:02 -- app/version.sh@14 -- # cut -f2 00:07:25.137 16:27:02 -- app/version.sh@19 -- # patch=1 00:07:25.137 16:27:02 -- app/version.sh@20 -- # get_header_version suffix 00:07:25.137 16:27:02 -- app/version.sh@14 -- # cut -f2 00:07:25.137 16:27:02 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.137 16:27:02 -- app/version.sh@14 -- # tr -d '"' 00:07:25.137 16:27:02 -- app/version.sh@20 -- # suffix=-pre 00:07:25.137 16:27:02 -- app/version.sh@22 -- # version=24.1 00:07:25.137 16:27:02 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.137 16:27:02 -- app/version.sh@25 -- # version=24.1.1 00:07:25.137 16:27:02 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:25.137 16:27:02 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:25.137 16:27:02 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.137 16:27:02 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:25.137 16:27:02 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:25.137 00:07:25.137 real 0m0.261s 00:07:25.137 user 0m0.169s 00:07:25.137 sys 0m0.131s 00:07:25.137 16:27:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.137 16:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.137 ************************************ 00:07:25.137 END TEST version 00:07:25.137 ************************************ 00:07:25.396 16:27:02 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:25.396 
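
[Editor's note] The version test above derives the SPDK version from include/spdk/version.h and cross-checks it against the Python package. Condensed from the trace into one helper (the `cut -f2` relies on the tab-delimited #define lines in version.h, which the traced pipeline confirms; the test maps the -pre suffix to rc0 for comparison):

get_header_version() {
    grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" \
        /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
}
major=$(get_header_version major)     # 24
minor=$(get_header_version minor)     # 1
patch=$(get_header_version patch)     # 1
suffix=$(get_header_version suffix)   # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
version=${version}rc0                 # 24.1.1rc0
py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
[[ $py_version == "$version" ]] && echo "header and python versions agree"
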
16:27:02 -- spdk/autotest.sh@191 -- # uname -s 00:07:25.396 16:27:02 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:25.396 16:27:02 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:25.396 16:27:02 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:25.396 16:27:02 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:25.396 16:27:02 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:25.396 16:27:02 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:25.396 16:27:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:25.396 16:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.396 16:27:02 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:25.396 16:27:02 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:25.396 16:27:02 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:25.396 16:27:02 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:25.396 16:27:02 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:25.396 16:27:02 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:25.396 16:27:02 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.396 16:27:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:25.396 16:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.396 16:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.396 ************************************ 00:07:25.396 START TEST nvmf_tcp 00:07:25.396 ************************************ 00:07:25.396 16:27:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.396 * Looking for test storage... 00:07:25.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:25.396 16:27:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.396 16:27:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.396 16:27:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.396 16:27:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.396 16:27:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.396 16:27:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.396 16:27:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.396 16:27:02 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.396 16:27:02 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.396 16:27:02 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.396 16:27:02 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.396 16:27:02 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.396 16:27:02 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.396 16:27:02 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.396 16:27:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.396 16:27:02 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.396 16:27:02 -- scripts/common.sh@344 -- # : 1 00:07:25.396 16:27:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.396 16:27:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.396 16:27:02 -- scripts/common.sh@364 -- # decimal 1 00:07:25.396 16:27:02 -- scripts/common.sh@352 -- # local d=1 00:07:25.396 16:27:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.396 16:27:02 -- scripts/common.sh@354 -- # echo 1 00:07:25.396 16:27:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.396 16:27:02 -- scripts/common.sh@365 -- # decimal 2 00:07:25.396 16:27:02 -- scripts/common.sh@352 -- # local d=2 00:07:25.396 16:27:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.396 16:27:02 -- scripts/common.sh@354 -- # echo 2 00:07:25.396 16:27:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.396 16:27:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.396 16:27:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.396 16:27:02 -- scripts/common.sh@367 -- # return 0 00:07:25.396 16:27:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.396 16:27:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.396 --rc genhtml_branch_coverage=1 00:07:25.396 --rc genhtml_function_coverage=1 00:07:25.396 --rc genhtml_legend=1 00:07:25.396 --rc geninfo_all_blocks=1 00:07:25.396 --rc geninfo_unexecuted_blocks=1 00:07:25.396 00:07:25.396 ' 00:07:25.396 16:27:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.396 --rc genhtml_branch_coverage=1 00:07:25.396 --rc genhtml_function_coverage=1 00:07:25.396 --rc genhtml_legend=1 00:07:25.396 --rc geninfo_all_blocks=1 00:07:25.396 --rc geninfo_unexecuted_blocks=1 00:07:25.396 00:07:25.396 ' 00:07:25.396 16:27:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.396 --rc genhtml_branch_coverage=1 00:07:25.396 --rc genhtml_function_coverage=1 00:07:25.396 --rc genhtml_legend=1 00:07:25.396 --rc geninfo_all_blocks=1 00:07:25.396 --rc geninfo_unexecuted_blocks=1 00:07:25.396 00:07:25.396 ' 00:07:25.396 16:27:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.396 --rc genhtml_branch_coverage=1 00:07:25.396 --rc genhtml_function_coverage=1 00:07:25.396 --rc genhtml_legend=1 00:07:25.396 --rc geninfo_all_blocks=1 00:07:25.396 --rc geninfo_unexecuted_blocks=1 00:07:25.396 00:07:25.396 ' 00:07:25.396 16:27:02 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:25.396 16:27:02 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:25.396 16:27:02 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:25.656 16:27:02 -- nvmf/common.sh@7 -- # uname -s 00:07:25.656 16:27:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.656 16:27:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.656 16:27:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.656 16:27:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.656 16:27:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.656 16:27:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.656 16:27:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.656 16:27:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.656 16:27:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.657 16:27:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.657 16:27:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:07:25.657 16:27:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:07:25.657 16:27:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.657 16:27:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.657 16:27:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:25.657 16:27:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.657 16:27:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.657 16:27:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.657 16:27:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.657 16:27:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.657 16:27:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.657 16:27:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.657 16:27:02 -- paths/export.sh@5 -- # export PATH 00:07:25.657 16:27:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.657 16:27:02 -- nvmf/common.sh@46 -- # : 0 00:07:25.657 16:27:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:25.657 16:27:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:25.657 16:27:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:25.657 16:27:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.657 16:27:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.657 16:27:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:25.657 16:27:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:25.657 16:27:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:25.657 16:27:02 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:25.657 16:27:02 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:25.657 16:27:02 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:25.657 16:27:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.657 16:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.657 16:27:02 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:25.657 16:27:02 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:25.657 16:27:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:25.657 16:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.657 16:27:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.657 ************************************ 00:07:25.657 START TEST nvmf_example 00:07:25.657 ************************************ 00:07:25.657 16:27:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:25.657 * Looking for test storage... 00:07:25.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:25.657 16:27:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:25.657 16:27:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:25.657 16:27:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:25.657 16:27:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:25.657 16:27:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:25.657 16:27:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:25.657 16:27:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:25.657 16:27:03 -- scripts/common.sh@335 -- # IFS=.-: 00:07:25.657 16:27:03 -- scripts/common.sh@335 -- # read -ra ver1 00:07:25.657 16:27:03 -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.657 16:27:03 -- scripts/common.sh@336 -- # read -ra ver2 00:07:25.657 16:27:03 -- scripts/common.sh@337 -- # local 'op=<' 00:07:25.657 16:27:03 -- scripts/common.sh@339 -- # ver1_l=2 00:07:25.657 16:27:03 -- scripts/common.sh@340 -- # ver2_l=1 00:07:25.657 16:27:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:25.657 16:27:03 -- scripts/common.sh@343 -- # case "$op" in 00:07:25.657 16:27:03 -- scripts/common.sh@344 -- # : 1 00:07:25.657 16:27:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:25.657 16:27:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.657 16:27:03 -- scripts/common.sh@364 -- # decimal 1 00:07:25.657 16:27:03 -- scripts/common.sh@352 -- # local d=1 00:07:25.657 16:27:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.657 16:27:03 -- scripts/common.sh@354 -- # echo 1 00:07:25.657 16:27:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:25.657 16:27:03 -- scripts/common.sh@365 -- # decimal 2 00:07:25.657 16:27:03 -- scripts/common.sh@352 -- # local d=2 00:07:25.657 16:27:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.657 16:27:03 -- scripts/common.sh@354 -- # echo 2 00:07:25.657 16:27:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:25.657 16:27:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:25.657 16:27:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:25.657 16:27:03 -- scripts/common.sh@367 -- # return 0 00:07:25.657 16:27:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.657 16:27:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.657 --rc genhtml_branch_coverage=1 00:07:25.657 --rc genhtml_function_coverage=1 00:07:25.657 --rc genhtml_legend=1 00:07:25.657 --rc geninfo_all_blocks=1 00:07:25.657 --rc geninfo_unexecuted_blocks=1 00:07:25.657 00:07:25.657 ' 00:07:25.657 16:27:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.657 --rc genhtml_branch_coverage=1 00:07:25.657 --rc genhtml_function_coverage=1 00:07:25.657 --rc genhtml_legend=1 00:07:25.657 --rc geninfo_all_blocks=1 00:07:25.657 --rc geninfo_unexecuted_blocks=1 00:07:25.657 00:07:25.657 ' 00:07:25.657 16:27:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.657 --rc genhtml_branch_coverage=1 00:07:25.657 --rc genhtml_function_coverage=1 00:07:25.657 --rc genhtml_legend=1 00:07:25.657 --rc geninfo_all_blocks=1 00:07:25.657 --rc geninfo_unexecuted_blocks=1 00:07:25.657 00:07:25.657 ' 00:07:25.657 16:27:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:25.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.657 --rc genhtml_branch_coverage=1 00:07:25.657 --rc genhtml_function_coverage=1 00:07:25.657 --rc genhtml_legend=1 00:07:25.657 --rc geninfo_all_blocks=1 00:07:25.657 --rc geninfo_unexecuted_blocks=1 00:07:25.657 00:07:25.657 ' 00:07:25.657 16:27:03 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:25.657 16:27:03 -- nvmf/common.sh@7 -- # uname -s 00:07:25.657 16:27:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.657 16:27:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.657 16:27:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.657 16:27:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.657 16:27:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.657 16:27:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.657 16:27:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.657 16:27:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.657 16:27:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.658 16:27:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.658 16:27:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
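
[Editor's note] nvmf/common.sh generates the initiator identity once up front and reuses it for every later `nvme connect`. A sketch of the pattern, with the UUID from this particular run shown in the comment:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # here: nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007
NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID portion, passed as --hostid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
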
00:07:25.658 16:27:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:07:25.658 16:27:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.658 16:27:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.658 16:27:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:25.658 16:27:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.658 16:27:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.658 16:27:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.658 16:27:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.658 16:27:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.658 16:27:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.658 16:27:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.658 16:27:03 -- paths/export.sh@5 -- # export PATH 00:07:25.658 16:27:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.658 16:27:03 -- nvmf/common.sh@46 -- # : 0 00:07:25.658 16:27:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:25.658 16:27:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:25.658 16:27:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:25.658 16:27:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.658 16:27:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.658 16:27:03 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:25.658 16:27:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:25.658 16:27:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:25.658 16:27:03 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:25.658 16:27:03 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:25.658 16:27:03 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:25.658 16:27:03 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:25.658 16:27:03 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:25.658 16:27:03 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:25.658 16:27:03 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:25.658 16:27:03 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:25.658 16:27:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.658 16:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.658 16:27:03 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:25.658 16:27:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:25.658 16:27:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.658 16:27:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:25.658 16:27:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:25.658 16:27:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:25.658 16:27:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.658 16:27:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.658 16:27:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.917 16:27:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:25.917 16:27:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:25.917 16:27:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:25.917 16:27:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:25.917 16:27:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:25.917 16:27:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:25.917 16:27:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.917 16:27:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.917 16:27:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:25.917 16:27:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:25.917 16:27:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:25.917 16:27:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:25.917 16:27:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:25.917 16:27:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.917 16:27:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:25.917 16:27:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:25.917 16:27:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:25.917 16:27:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:25.917 16:27:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:25.917 Cannot find device "nvmf_init_br" 00:07:25.917 16:27:03 -- nvmf/common.sh@153 -- # true 00:07:25.917 16:27:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:25.917 Cannot find device "nvmf_tgt_br" 00:07:25.918 16:27:03 -- nvmf/common.sh@154 -- # true 00:07:25.918 16:27:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.918 Cannot find device "nvmf_tgt_br2" 
00:07:25.918 16:27:03 -- nvmf/common.sh@155 -- # true 00:07:25.918 16:27:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:25.918 Cannot find device "nvmf_init_br" 00:07:25.918 16:27:03 -- nvmf/common.sh@156 -- # true 00:07:25.918 16:27:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:25.918 Cannot find device "nvmf_tgt_br" 00:07:25.918 16:27:03 -- nvmf/common.sh@157 -- # true 00:07:25.918 16:27:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:25.918 Cannot find device "nvmf_tgt_br2" 00:07:25.918 16:27:03 -- nvmf/common.sh@158 -- # true 00:07:25.918 16:27:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:25.918 Cannot find device "nvmf_br" 00:07:25.918 16:27:03 -- nvmf/common.sh@159 -- # true 00:07:25.918 16:27:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:25.918 Cannot find device "nvmf_init_if" 00:07:25.918 16:27:03 -- nvmf/common.sh@160 -- # true 00:07:25.918 16:27:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:25.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.918 16:27:03 -- nvmf/common.sh@161 -- # true 00:07:25.918 16:27:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:25.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.918 16:27:03 -- nvmf/common.sh@162 -- # true 00:07:25.918 16:27:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:25.918 16:27:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:25.918 16:27:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:25.918 16:27:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:25.918 16:27:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:25.918 16:27:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:25.918 16:27:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:25.918 16:27:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:25.918 16:27:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:25.918 16:27:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:25.918 16:27:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:25.918 16:27:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:25.918 16:27:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:25.918 16:27:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:25.918 16:27:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:25.918 16:27:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:25.918 16:27:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:26.177 16:27:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:26.177 16:27:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.177 16:27:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.177 16:27:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.177 16:27:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:26.177 16:27:03 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:26.177 16:27:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:26.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:07:26.177 00:07:26.177 --- 10.0.0.2 ping statistics --- 00:07:26.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.177 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:07:26.177 16:27:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:26.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:26.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:07:26.177 00:07:26.177 --- 10.0.0.3 ping statistics --- 00:07:26.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.177 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:26.177 16:27:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:26.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:26.177 00:07:26.177 --- 10.0.0.1 ping statistics --- 00:07:26.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.177 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:26.177 16:27:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.177 16:27:03 -- nvmf/common.sh@421 -- # return 0 00:07:26.177 16:27:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:26.177 16:27:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.177 16:27:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:26.177 16:27:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:26.177 16:27:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.177 16:27:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:26.177 16:27:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:26.177 16:27:03 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:26.177 16:27:03 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:26.177 16:27:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.177 16:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:26.177 16:27:03 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:26.177 16:27:03 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:26.177 16:27:03 -- target/nvmf_example.sh@34 -- # nvmfpid=72136 00:07:26.177 16:27:03 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.177 16:27:03 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:26.177 16:27:03 -- target/nvmf_example.sh@36 -- # waitforlisten 72136 00:07:26.177 16:27:03 -- common/autotest_common.sh@829 -- # '[' -z 72136 ']' 00:07:26.177 16:27:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.177 16:27:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.177 16:27:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
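
[Editor's note] The block above is nvmf_veth_init building an isolated test network: the target runs inside the nvmf_tgt_ns_spdk namespace, the initiator stays in the root namespace, and a bridge joins the veth pairs. Stripped to the essential commands from this log (the `ip link set ... up` steps and the second target interface are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> target reachability, as verified above
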
00:07:26.177 16:27:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.177 16:27:03 -- common/autotest_common.sh@10 -- # set +x 00:07:27.115 16:27:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.115 16:27:04 -- common/autotest_common.sh@862 -- # return 0 00:07:27.115 16:27:04 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:27.115 16:27:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.115 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.374 16:27:04 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.374 16:27:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.374 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.374 16:27:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.374 16:27:04 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:27.374 16:27:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.374 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.374 16:27:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.374 16:27:04 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:27.374 16:27:04 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.374 16:27:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.374 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.374 16:27:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.374 16:27:04 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:27.374 16:27:04 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.374 16:27:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.374 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.374 16:27:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.374 16:27:04 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.374 16:27:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.374 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.374 16:27:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.374 16:27:04 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:27.374 16:27:04 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:39.584 Initializing NVMe Controllers 00:07:39.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:39.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:39.584 Initialization complete. Launching workers. 
00:07:39.584 ======================================================== 00:07:39.584 Latency(us) 00:07:39.584 Device Information : IOPS MiB/s Average min max 00:07:39.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16938.91 66.17 3777.82 502.54 23978.05 00:07:39.584 ======================================================== 00:07:39.584 Total : 16938.91 66.17 3777.82 502.54 23978.05 00:07:39.584 00:07:39.584 16:27:14 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:39.584 16:27:14 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:39.584 16:27:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:39.584 16:27:14 -- nvmf/common.sh@116 -- # sync 00:07:39.584 16:27:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:39.584 16:27:15 -- nvmf/common.sh@119 -- # set +e 00:07:39.584 16:27:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:39.584 16:27:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:39.584 rmmod nvme_tcp 00:07:39.584 rmmod nvme_fabrics 00:07:39.584 rmmod nvme_keyring 00:07:39.584 16:27:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:39.584 16:27:15 -- nvmf/common.sh@123 -- # set -e 00:07:39.584 16:27:15 -- nvmf/common.sh@124 -- # return 0 00:07:39.584 16:27:15 -- nvmf/common.sh@477 -- # '[' -n 72136 ']' 00:07:39.584 16:27:15 -- nvmf/common.sh@478 -- # killprocess 72136 00:07:39.584 16:27:15 -- common/autotest_common.sh@936 -- # '[' -z 72136 ']' 00:07:39.584 16:27:15 -- common/autotest_common.sh@940 -- # kill -0 72136 00:07:39.584 16:27:15 -- common/autotest_common.sh@941 -- # uname 00:07:39.584 16:27:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:39.584 16:27:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72136 00:07:39.584 16:27:15 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:39.584 16:27:15 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:39.584 killing process with pid 72136 00:07:39.584 16:27:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72136' 00:07:39.584 16:27:15 -- common/autotest_common.sh@955 -- # kill 72136 00:07:39.584 16:27:15 -- common/autotest_common.sh@960 -- # wait 72136 00:07:39.584 nvmf threads initialize successfully 00:07:39.584 bdev subsystem init successfully 00:07:39.584 created a nvmf target service 00:07:39.584 create targets's poll groups done 00:07:39.584 all subsystems of target started 00:07:39.584 nvmf target is running 00:07:39.584 all subsystems of target stopped 00:07:39.584 destroy targets's poll groups done 00:07:39.584 destroyed the nvmf target service 00:07:39.584 bdev subsystem finish successfully 00:07:39.584 nvmf threads destroy successfully 00:07:39.584 16:27:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:39.584 16:27:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:39.584 16:27:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:39.584 16:27:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.584 16:27:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:39.584 16:27:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.584 16:27:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.584 16:27:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.584 16:27:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:39.584 16:27:15 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:39.584 16:27:15 -- common/autotest_common.sh@728 -- # 
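
[Editor's note] Once the target was listening for RPCs, the example test provisioned it and drove I/O with five RPCs plus one perf run; the commands below are taken directly from the trace above (rpc.py standing in for rpc_cmd):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512    # 64 MiB malloc bdev, 512 B blocks -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
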
xtrace_disable 00:07:39.585 16:27:15 -- common/autotest_common.sh@10 -- # set +x 00:07:39.585 00:07:39.585 real 0m12.500s 00:07:39.585 user 0m44.542s 00:07:39.585 sys 0m2.135s 00:07:39.585 16:27:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.585 16:27:15 -- common/autotest_common.sh@10 -- # set +x 00:07:39.585 ************************************ 00:07:39.585 END TEST nvmf_example 00:07:39.585 ************************************ 00:07:39.585 16:27:15 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:39.585 16:27:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:39.585 16:27:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.585 16:27:15 -- common/autotest_common.sh@10 -- # set +x 00:07:39.585 ************************************ 00:07:39.585 START TEST nvmf_filesystem 00:07:39.585 ************************************ 00:07:39.585 16:27:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:39.585 * Looking for test storage... 00:07:39.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:39.585 16:27:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:39.585 16:27:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:39.585 16:27:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:39.585 16:27:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:39.585 16:27:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:39.585 16:27:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:39.585 16:27:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:39.585 16:27:15 -- scripts/common.sh@335 -- # IFS=.-: 00:07:39.585 16:27:15 -- scripts/common.sh@335 -- # read -ra ver1 00:07:39.585 16:27:15 -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.585 16:27:15 -- scripts/common.sh@336 -- # read -ra ver2 00:07:39.585 16:27:15 -- scripts/common.sh@337 -- # local 'op=<' 00:07:39.585 16:27:15 -- scripts/common.sh@339 -- # ver1_l=2 00:07:39.585 16:27:15 -- scripts/common.sh@340 -- # ver2_l=1 00:07:39.585 16:27:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:39.585 16:27:15 -- scripts/common.sh@343 -- # case "$op" in 00:07:39.585 16:27:15 -- scripts/common.sh@344 -- # : 1 00:07:39.585 16:27:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:39.585 16:27:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.585 16:27:15 -- scripts/common.sh@364 -- # decimal 1 00:07:39.585 16:27:15 -- scripts/common.sh@352 -- # local d=1 00:07:39.585 16:27:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.585 16:27:15 -- scripts/common.sh@354 -- # echo 1 00:07:39.585 16:27:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:39.585 16:27:15 -- scripts/common.sh@365 -- # decimal 2 00:07:39.585 16:27:15 -- scripts/common.sh@352 -- # local d=2 00:07:39.585 16:27:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.585 16:27:15 -- scripts/common.sh@354 -- # echo 2 00:07:39.585 16:27:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:39.585 16:27:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:39.585 16:27:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:39.585 16:27:15 -- scripts/common.sh@367 -- # return 0 00:07:39.585 16:27:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.585 16:27:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:39.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.585 --rc genhtml_branch_coverage=1 00:07:39.585 --rc genhtml_function_coverage=1 00:07:39.585 --rc genhtml_legend=1 00:07:39.585 --rc geninfo_all_blocks=1 00:07:39.585 --rc geninfo_unexecuted_blocks=1 00:07:39.585 00:07:39.585 ' 00:07:39.585 16:27:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:39.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.585 --rc genhtml_branch_coverage=1 00:07:39.585 --rc genhtml_function_coverage=1 00:07:39.585 --rc genhtml_legend=1 00:07:39.585 --rc geninfo_all_blocks=1 00:07:39.585 --rc geninfo_unexecuted_blocks=1 00:07:39.585 00:07:39.585 ' 00:07:39.585 16:27:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:39.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.585 --rc genhtml_branch_coverage=1 00:07:39.585 --rc genhtml_function_coverage=1 00:07:39.585 --rc genhtml_legend=1 00:07:39.585 --rc geninfo_all_blocks=1 00:07:39.585 --rc geninfo_unexecuted_blocks=1 00:07:39.585 00:07:39.585 ' 00:07:39.585 16:27:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:39.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.585 --rc genhtml_branch_coverage=1 00:07:39.585 --rc genhtml_function_coverage=1 00:07:39.585 --rc genhtml_legend=1 00:07:39.585 --rc geninfo_all_blocks=1 00:07:39.585 --rc geninfo_unexecuted_blocks=1 00:07:39.585 00:07:39.585 ' 00:07:39.585 16:27:15 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:39.585 16:27:15 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:39.585 16:27:15 -- common/autotest_common.sh@34 -- # set -e 00:07:39.585 16:27:15 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:39.585 16:27:15 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:39.585 16:27:15 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:39.585 16:27:15 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:39.585 16:27:15 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:39.585 16:27:15 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:39.585 16:27:15 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:39.585 16:27:15 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:39.585 16:27:15 -- common/build_config.sh@5 -- # 
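
The `cmp_versions 1.15 '<' 2` trace above is scripts/common.sh deciding whether the installed lcov predates version 2, since older lcov releases want different coverage flags. The idea: split both version strings on `.`, `-`, or `:` and compare field by field, numerically. A self-contained sketch of the same idea, assuming purely numeric fields (the real helper also normalizes non-numeric parts through its `decimal` function):

    version_lt() {                     # succeeds when $1 < $2
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < len; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                       # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov detected"
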
CONFIG_USDT=y 00:07:39.585 16:27:15 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:39.585 16:27:15 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:39.585 16:27:15 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:39.585 16:27:15 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:39.585 16:27:15 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:39.585 16:27:15 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:39.585 16:27:15 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:39.585 16:27:15 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:39.585 16:27:15 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:39.585 16:27:15 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:39.585 16:27:15 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:39.585 16:27:15 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:39.585 16:27:15 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:39.585 16:27:15 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:39.585 16:27:15 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:39.585 16:27:15 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:39.585 16:27:15 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:39.585 16:27:15 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:39.585 16:27:15 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:39.585 16:27:15 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:39.585 16:27:15 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:39.585 16:27:15 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:39.585 16:27:15 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:39.585 16:27:15 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:39.585 16:27:15 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:39.585 16:27:15 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:39.585 16:27:15 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:39.585 16:27:15 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:39.585 16:27:15 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:39.585 16:27:15 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:39.585 16:27:15 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:39.585 16:27:15 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:39.585 16:27:15 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:39.585 16:27:15 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:39.585 16:27:15 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:39.585 16:27:15 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:39.585 16:27:15 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:39.585 16:27:15 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:39.585 16:27:15 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:39.585 16:27:15 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:39.585 16:27:15 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:39.585 16:27:15 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:39.585 16:27:15 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:39.585 16:27:15 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:39.585 16:27:15 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:39.585 16:27:15 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:07:39.585 16:27:15 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:39.585 16:27:15 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:39.585 16:27:15 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:39.585 16:27:15 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:39.585 16:27:15 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:39.585 16:27:15 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:39.585 16:27:15 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:39.585 16:27:15 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:39.585 16:27:15 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:39.585 16:27:15 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:39.585 16:27:15 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:39.585 16:27:15 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:39.585 16:27:15 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:39.585 16:27:15 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:39.585 16:27:15 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:39.585 16:27:15 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:39.586 16:27:15 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:07:39.586 16:27:15 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:39.586 16:27:15 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:39.586 16:27:15 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:39.586 16:27:15 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:39.586 16:27:15 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:39.586 16:27:15 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:39.586 16:27:15 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:39.586 16:27:15 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:39.586 16:27:15 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:39.586 16:27:15 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:39.586 16:27:15 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:39.586 16:27:15 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:39.586 16:27:15 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:39.586 16:27:15 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:39.586 16:27:15 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:39.586 16:27:15 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:39.586 16:27:15 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:39.586 16:27:15 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:39.586 16:27:15 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:39.586 16:27:15 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:39.586 16:27:15 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:39.586 16:27:15 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:39.586 16:27:15 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:39.586 16:27:15 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:39.586 16:27:15 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:07:39.586 16:27:15 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:39.586 16:27:15 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:39.586 #define SPDK_CONFIG_H 00:07:39.586 #define SPDK_CONFIG_APPS 1 00:07:39.586 #define SPDK_CONFIG_ARCH native 00:07:39.586 #undef SPDK_CONFIG_ASAN 00:07:39.586 #define SPDK_CONFIG_AVAHI 1 00:07:39.586 #undef SPDK_CONFIG_CET 00:07:39.586 #define SPDK_CONFIG_COVERAGE 1 00:07:39.586 #define SPDK_CONFIG_CROSS_PREFIX 00:07:39.586 #undef SPDK_CONFIG_CRYPTO 00:07:39.586 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:39.586 #undef SPDK_CONFIG_CUSTOMOCF 00:07:39.586 #undef SPDK_CONFIG_DAOS 00:07:39.586 #define SPDK_CONFIG_DAOS_DIR 00:07:39.586 #define SPDK_CONFIG_DEBUG 1 00:07:39.586 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:39.586 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:39.586 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:39.586 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:39.586 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:39.586 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:39.586 #define SPDK_CONFIG_EXAMPLES 1 00:07:39.586 #undef SPDK_CONFIG_FC 00:07:39.586 #define SPDK_CONFIG_FC_PATH 00:07:39.586 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:39.586 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:39.586 #undef SPDK_CONFIG_FUSE 00:07:39.586 #undef SPDK_CONFIG_FUZZER 00:07:39.586 #define SPDK_CONFIG_FUZZER_LIB 00:07:39.586 #define SPDK_CONFIG_GOLANG 1 00:07:39.586 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:39.586 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:39.586 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:39.586 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:39.586 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:39.586 #define SPDK_CONFIG_IDXD 1 00:07:39.586 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:39.586 #undef SPDK_CONFIG_IPSEC_MB 00:07:39.586 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:39.586 #define SPDK_CONFIG_ISAL 1 00:07:39.586 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:39.586 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:39.586 #define SPDK_CONFIG_LIBDIR 00:07:39.586 #undef SPDK_CONFIG_LTO 00:07:39.586 #define SPDK_CONFIG_MAX_LCORES 00:07:39.586 #define SPDK_CONFIG_NVME_CUSE 1 00:07:39.586 #undef SPDK_CONFIG_OCF 00:07:39.586 #define SPDK_CONFIG_OCF_PATH 00:07:39.586 #define SPDK_CONFIG_OPENSSL_PATH 00:07:39.586 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:39.586 #undef SPDK_CONFIG_PGO_USE 00:07:39.586 #define SPDK_CONFIG_PREFIX /usr/local 00:07:39.586 #undef SPDK_CONFIG_RAID5F 00:07:39.586 #undef SPDK_CONFIG_RBD 00:07:39.586 #define SPDK_CONFIG_RDMA 1 00:07:39.586 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:39.586 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:39.586 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:39.586 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:39.586 #define SPDK_CONFIG_SHARED 1 00:07:39.586 #undef SPDK_CONFIG_SMA 00:07:39.586 #define SPDK_CONFIG_TESTS 1 00:07:39.586 #undef SPDK_CONFIG_TSAN 00:07:39.586 #define SPDK_CONFIG_UBLK 1 00:07:39.586 #define SPDK_CONFIG_UBSAN 1 00:07:39.586 #undef SPDK_CONFIG_UNIT_TESTS 00:07:39.586 #undef SPDK_CONFIG_URING 00:07:39.586 #define SPDK_CONFIG_URING_PATH 00:07:39.586 #undef SPDK_CONFIG_URING_ZNS 00:07:39.586 #define SPDK_CONFIG_USDT 1 00:07:39.586 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:39.586 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:39.586 #undef SPDK_CONFIG_VFIO_USER 00:07:39.586 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:07:39.586 #define SPDK_CONFIG_VHOST 1 00:07:39.586 #define SPDK_CONFIG_VIRTIO 1 00:07:39.586 #undef SPDK_CONFIG_VTUNE 00:07:39.586 #define SPDK_CONFIG_VTUNE_DIR 00:07:39.586 #define SPDK_CONFIG_WERROR 1 00:07:39.586 #define SPDK_CONFIG_WPDK_DIR 00:07:39.586 #undef SPDK_CONFIG_XNVME 00:07:39.586 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:39.586 16:27:15 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:39.586 16:27:15 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.586 16:27:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.586 16:27:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.586 16:27:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.586 16:27:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.586 16:27:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.586 16:27:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.586 16:27:15 -- paths/export.sh@5 -- # export PATH 00:07:39.586 16:27:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.586 16:27:15 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:39.586 16:27:15 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:39.586 16:27:15 -- pm/common@6 -- # readlink -f 
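
The run of `\#\d\e\f\i\n\e ...` escapes a few lines up is less exotic than it looks: applications.sh slurps the generated config.h and runs a plain glob substring match on it to learn whether this tree was configured with debug enabled. Unescaped, the test reduces to roughly this (path taken from this run; the echoed message is illustrative):

    config=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    [[ $(<"$config") == *"#define SPDK_CONFIG_DEBUG"* ]] && echo "debug build"
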
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:39.586 16:27:15 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:39.586 16:27:15 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:39.586 16:27:15 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:39.586 16:27:15 -- pm/common@16 -- # TEST_TAG=N/A 00:07:39.586 16:27:15 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:39.586 16:27:15 -- common/autotest_common.sh@52 -- # : 1 00:07:39.586 16:27:15 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:39.586 16:27:15 -- common/autotest_common.sh@56 -- # : 0 00:07:39.586 16:27:15 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:39.586 16:27:15 -- common/autotest_common.sh@58 -- # : 0 00:07:39.586 16:27:15 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:39.586 16:27:15 -- common/autotest_common.sh@60 -- # : 1 00:07:39.586 16:27:15 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:39.586 16:27:15 -- common/autotest_common.sh@62 -- # : 0 00:07:39.586 16:27:15 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:39.586 16:27:15 -- common/autotest_common.sh@64 -- # : 00:07:39.586 16:27:15 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:39.586 16:27:15 -- common/autotest_common.sh@66 -- # : 0 00:07:39.586 16:27:15 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:39.586 16:27:15 -- common/autotest_common.sh@68 -- # : 0 00:07:39.586 16:27:15 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:39.586 16:27:15 -- common/autotest_common.sh@70 -- # : 0 00:07:39.586 16:27:15 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:39.586 16:27:15 -- common/autotest_common.sh@72 -- # : 0 00:07:39.586 16:27:15 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:39.586 16:27:15 -- common/autotest_common.sh@74 -- # : 0 00:07:39.586 16:27:15 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:39.586 16:27:15 -- common/autotest_common.sh@76 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:39.587 16:27:15 -- common/autotest_common.sh@78 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:39.587 16:27:15 -- common/autotest_common.sh@80 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:39.587 16:27:15 -- common/autotest_common.sh@82 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:39.587 16:27:15 -- common/autotest_common.sh@84 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:39.587 16:27:15 -- common/autotest_common.sh@86 -- # : 1 00:07:39.587 16:27:15 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:39.587 16:27:15 -- common/autotest_common.sh@88 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:39.587 16:27:15 -- common/autotest_common.sh@90 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:39.587 16:27:15 -- common/autotest_common.sh@92 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:39.587 16:27:15 -- common/autotest_common.sh@94 -- # : 0 00:07:39.587 16:27:15 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:39.587 16:27:15 -- common/autotest_common.sh@96 -- # : tcp 00:07:39.587 16:27:15 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:39.587 16:27:15 -- common/autotest_common.sh@98 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:39.587 16:27:15 -- common/autotest_common.sh@100 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:39.587 16:27:15 -- common/autotest_common.sh@102 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:39.587 16:27:15 -- common/autotest_common.sh@104 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:39.587 16:27:15 -- common/autotest_common.sh@106 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:39.587 16:27:15 -- common/autotest_common.sh@108 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:39.587 16:27:15 -- common/autotest_common.sh@110 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:39.587 16:27:15 -- common/autotest_common.sh@112 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:39.587 16:27:15 -- common/autotest_common.sh@114 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:39.587 16:27:15 -- common/autotest_common.sh@116 -- # : 1 00:07:39.587 16:27:15 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:39.587 16:27:15 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:39.587 16:27:15 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:39.587 16:27:15 -- common/autotest_common.sh@120 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:39.587 16:27:15 -- common/autotest_common.sh@122 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:39.587 16:27:15 -- common/autotest_common.sh@124 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:39.587 16:27:15 -- common/autotest_common.sh@126 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:39.587 16:27:15 -- common/autotest_common.sh@128 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:39.587 16:27:15 -- common/autotest_common.sh@130 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:39.587 16:27:15 -- common/autotest_common.sh@132 -- # : v23.11 00:07:39.587 16:27:15 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:39.587 16:27:15 -- common/autotest_common.sh@134 -- # : true 00:07:39.587 16:27:15 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:39.587 16:27:15 -- common/autotest_common.sh@136 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:39.587 16:27:15 -- common/autotest_common.sh@138 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:39.587 16:27:15 -- common/autotest_common.sh@140 -- # : 1 00:07:39.587 16:27:15 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:39.587 16:27:15 -- 
common/autotest_common.sh@142 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:39.587 16:27:15 -- common/autotest_common.sh@144 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:39.587 16:27:15 -- common/autotest_common.sh@146 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:39.587 16:27:15 -- common/autotest_common.sh@148 -- # : 00:07:39.587 16:27:15 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:39.587 16:27:15 -- common/autotest_common.sh@150 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:39.587 16:27:15 -- common/autotest_common.sh@152 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:39.587 16:27:15 -- common/autotest_common.sh@154 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:39.587 16:27:15 -- common/autotest_common.sh@156 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:39.587 16:27:15 -- common/autotest_common.sh@158 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:39.587 16:27:15 -- common/autotest_common.sh@160 -- # : 0 00:07:39.587 16:27:15 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:39.587 16:27:15 -- common/autotest_common.sh@163 -- # : 00:07:39.587 16:27:15 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:39.587 16:27:15 -- common/autotest_common.sh@165 -- # : 1 00:07:39.587 16:27:15 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:39.587 16:27:15 -- common/autotest_common.sh@167 -- # : 1 00:07:39.587 16:27:15 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:39.587 16:27:15 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:39.587 16:27:15 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:39.587 16:27:15 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:39.587 16:27:15 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:39.587 16:27:15 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:39.587 16:27:15 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:39.587 16:27:15 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:39.587 16:27:15 -- common/autotest_common.sh@174 -- # 
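
Each `-- # : 1` / `-- # export SPDK_TEST_...` pair in the long run above is one instance of the same shell idiom: `:` is a no-op, so its argument can carry a `${VAR:=default}` expansion that assigns only when the variable is still unset. That is how every SPDK_TEST_* switch picks up a default without clobbering values injected by the CI job, for example:

    : "${SPDK_TEST_NVMF:=0}"    # keep the caller's value, else default to 0
    export SPDK_TEST_NVMF
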
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:39.587 16:27:15 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:39.587 16:27:15 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:39.587 16:27:15 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:39.587 16:27:15 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:39.587 16:27:15 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:39.587 16:27:15 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:39.587 16:27:15 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:39.587 16:27:15 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:39.587 16:27:15 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:39.587 16:27:15 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:39.587 16:27:15 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:39.587 16:27:15 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:39.587 16:27:15 -- common/autotest_common.sh@196 -- # cat 00:07:39.587 16:27:15 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:39.587 16:27:15 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:39.587 16:27:15 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:39.587 16:27:15 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:39.587 16:27:15 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:39.587 16:27:15 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:39.587 16:27:15 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:39.587 16:27:15 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:39.587 16:27:15 -- 
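
The `asan_suppression_file` sequence traced above (`rm -rf`, append `leak:libfuse3.so`, export `LSAN_OPTIONS`) is the usual leak-sanitizer suppression dance: regenerate a suppression file listing known, uninteresting leaks, then point LSAN at it before the target starts. Reconstructed in one place, limited to what the trace actually shows:

    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo "leak:libfuse3.so" >> "$supp"            # known leak in libfuse, not ours
    export LSAN_OPTIONS=suppressions=$supp
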
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:39.587 16:27:15 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:39.588 16:27:15 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:39.588 16:27:15 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:39.588 16:27:15 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:39.588 16:27:15 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:39.588 16:27:15 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:39.588 16:27:15 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:39.588 16:27:15 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:39.588 16:27:15 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:39.588 16:27:15 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:39.588 16:27:15 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:39.588 16:27:15 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:39.588 16:27:15 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:39.588 16:27:15 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:39.588 16:27:15 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:39.588 16:27:15 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:39.588 16:27:15 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:39.588 16:27:15 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:39.588 16:27:15 -- common/autotest_common.sh@259 -- # valgrind= 00:07:39.588 16:27:15 -- common/autotest_common.sh@265 -- # uname -s 00:07:39.588 16:27:15 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:39.588 16:27:15 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:39.588 16:27:15 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:39.588 16:27:15 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:39.588 16:27:15 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:39.588 16:27:15 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:07:39.588 16:27:15 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:39.588 16:27:15 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:39.588 16:27:15 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:39.588 16:27:15 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:39.588 16:27:15 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:39.588 16:27:15 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:39.588 16:27:15 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:39.588 16:27:15 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:39.588 16:27:15 -- common/autotest_common.sh@319 -- # [[ 
-z 72388 ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@319 -- # kill -0 72388 00:07:39.588 16:27:15 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:39.588 16:27:15 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:39.588 16:27:15 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:39.588 16:27:15 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:39.588 16:27:15 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:39.588 16:27:15 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:39.588 16:27:15 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:39.588 16:27:15 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.kj2yw7 00:07:39.588 16:27:15 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:39.588 16:27:15 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.kj2yw7/tests/target /tmp/spdk.kj2yw7 00:07:39.588 16:27:15 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@328 -- # df -T 00:07:39.588 16:27:15 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293748224 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289866752 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265167872 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:07:39.588 16:27:15 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293748224 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289866752 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266286080 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:07:39.588 16:27:15 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # avails["$mount"]=98360635392 00:07:39.588 16:27:15 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:07:39.588 16:27:15 -- common/autotest_common.sh@364 -- # uses["$mount"]=1342144512 00:07:39.588 16:27:15 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:39.588 16:27:15 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:07:39.588 * Looking for test storage... 00:07:39.588 16:27:15 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:39.588 16:27:15 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:39.588 16:27:15 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:39.588 16:27:15 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:39.588 16:27:15 -- common/autotest_common.sh@373 -- # mount=/home 00:07:39.588 16:27:15 -- common/autotest_common.sh@375 -- # target_space=13293748224 00:07:39.588 16:27:15 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:39.588 16:27:15 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:39.588 16:27:15 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:07:39.588 16:27:15 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:39.588 16:27:15 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:39.589 16:27:15 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:39.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:39.589 16:27:15 -- common/autotest_common.sh@390 -- # return 0 00:07:39.589 16:27:15 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:39.589 16:27:15 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:39.589 16:27:15 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:39.589 16:27:15 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:39.589 16:27:15 -- common/autotest_common.sh@1682 -- # true 00:07:39.589 16:27:15 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:39.589 16:27:15 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:39.589 16:27:15 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:39.589 16:27:15 -- common/autotest_common.sh@27 -- # exec 00:07:39.589 16:27:15 -- common/autotest_common.sh@29 -- # exec 00:07:39.589 16:27:15 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:39.589 16:27:15 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
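
`set_test_storage` above walks `df -T` output into parallel maps of mount point, filesystem type, size and free space, then settles on the first candidate directory (the test dir, then fallbacks) whose filesystem has at least the requested ~2 GiB free, printing the "Found test storage" banner seen in the log. Stripped of the bookkeeping, the decision is essentially the following; GNU df's `--output` option is assumed and the helper name is illustrative:

    pick_test_storage() {
        local requested=$1; shift
        local dir avail
        for dir in "$@"; do
            mkdir -p "$dir" || continue
            avail=$(df --output=avail -B1 "$dir" | tail -n1)   # free bytes on that fs
            if (( avail >= requested )); then
                printf '* Found test storage at %s\n' "$dir"
                return 0
            fi
        done
        return 1
    }
    pick_test_storage $((2 * 1024 ** 3)) "$testdir" /tmp/spdk_tests
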
0 : 0 - 1]' 00:07:39.589 16:27:15 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:39.589 16:27:15 -- common/autotest_common.sh@18 -- # set -x 00:07:39.589 16:27:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:39.589 16:27:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:39.589 16:27:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:39.589 16:27:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:39.589 16:27:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:39.589 16:27:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:39.589 16:27:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:39.589 16:27:15 -- scripts/common.sh@335 -- # IFS=.-: 00:07:39.589 16:27:15 -- scripts/common.sh@335 -- # read -ra ver1 00:07:39.589 16:27:15 -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.589 16:27:15 -- scripts/common.sh@336 -- # read -ra ver2 00:07:39.589 16:27:15 -- scripts/common.sh@337 -- # local 'op=<' 00:07:39.589 16:27:15 -- scripts/common.sh@339 -- # ver1_l=2 00:07:39.589 16:27:15 -- scripts/common.sh@340 -- # ver2_l=1 00:07:39.589 16:27:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:39.589 16:27:15 -- scripts/common.sh@343 -- # case "$op" in 00:07:39.589 16:27:15 -- scripts/common.sh@344 -- # : 1 00:07:39.589 16:27:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:39.589 16:27:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.589 16:27:15 -- scripts/common.sh@364 -- # decimal 1 00:07:39.589 16:27:15 -- scripts/common.sh@352 -- # local d=1 00:07:39.589 16:27:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.589 16:27:15 -- scripts/common.sh@354 -- # echo 1 00:07:39.589 16:27:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:39.589 16:27:15 -- scripts/common.sh@365 -- # decimal 2 00:07:39.589 16:27:15 -- scripts/common.sh@352 -- # local d=2 00:07:39.589 16:27:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.589 16:27:15 -- scripts/common.sh@354 -- # echo 2 00:07:39.589 16:27:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:39.589 16:27:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:39.589 16:27:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:39.589 16:27:15 -- scripts/common.sh@367 -- # return 0 00:07:39.589 16:27:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.589 16:27:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:39.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.589 --rc genhtml_branch_coverage=1 00:07:39.589 --rc genhtml_function_coverage=1 00:07:39.589 --rc genhtml_legend=1 00:07:39.589 --rc geninfo_all_blocks=1 00:07:39.589 --rc geninfo_unexecuted_blocks=1 00:07:39.589 00:07:39.589 ' 00:07:39.589 16:27:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:39.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.589 --rc genhtml_branch_coverage=1 00:07:39.589 --rc genhtml_function_coverage=1 00:07:39.589 --rc genhtml_legend=1 00:07:39.589 --rc geninfo_all_blocks=1 00:07:39.589 --rc geninfo_unexecuted_blocks=1 00:07:39.589 00:07:39.589 ' 00:07:39.589 16:27:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:39.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.589 --rc genhtml_branch_coverage=1 00:07:39.589 --rc genhtml_function_coverage=1 00:07:39.589 --rc genhtml_legend=1 00:07:39.589 --rc geninfo_all_blocks=1 00:07:39.589 --rc 
geninfo_unexecuted_blocks=1 00:07:39.589 00:07:39.589 ' 00:07:39.589 16:27:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:39.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.589 --rc genhtml_branch_coverage=1 00:07:39.589 --rc genhtml_function_coverage=1 00:07:39.589 --rc genhtml_legend=1 00:07:39.589 --rc geninfo_all_blocks=1 00:07:39.589 --rc geninfo_unexecuted_blocks=1 00:07:39.589 00:07:39.589 ' 00:07:39.589 16:27:15 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:39.589 16:27:15 -- nvmf/common.sh@7 -- # uname -s 00:07:39.589 16:27:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.589 16:27:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.589 16:27:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.589 16:27:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.589 16:27:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.589 16:27:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.589 16:27:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.589 16:27:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.589 16:27:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.589 16:27:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.589 16:27:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:07:39.589 16:27:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:07:39.589 16:27:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.589 16:27:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.589 16:27:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:39.589 16:27:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.589 16:27:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.589 16:27:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.589 16:27:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.589 16:27:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.589 16:27:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.589 16:27:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.589 16:27:15 -- paths/export.sh@5 -- # export PATH 00:07:39.589 16:27:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.589 16:27:15 -- nvmf/common.sh@46 -- # : 0 00:07:39.589 16:27:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:39.589 16:27:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:39.589 16:27:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:39.589 16:27:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.590 16:27:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.590 16:27:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:39.590 16:27:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:39.590 16:27:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:39.590 16:27:15 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:39.590 16:27:15 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:39.590 16:27:15 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:39.590 16:27:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:39.590 16:27:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.590 16:27:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:39.590 16:27:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:39.590 16:27:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:39.590 16:27:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.590 16:27:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.590 16:27:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.590 16:27:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:39.590 16:27:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:39.590 16:27:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:39.590 16:27:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:39.590 16:27:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:39.590 16:27:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:39.590 16:27:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.590 16:27:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.590 16:27:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:39.590 16:27:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:39.590 16:27:15 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:39.590 16:27:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:39.590 16:27:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:39.590 16:27:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.590 16:27:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:39.590 16:27:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:39.590 16:27:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:39.590 16:27:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:39.590 16:27:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:39.590 16:27:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:39.590 Cannot find device "nvmf_tgt_br" 00:07:39.590 16:27:15 -- nvmf/common.sh@154 -- # true 00:07:39.590 16:27:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:39.590 Cannot find device "nvmf_tgt_br2" 00:07:39.590 16:27:15 -- nvmf/common.sh@155 -- # true 00:07:39.590 16:27:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:39.590 16:27:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:39.590 Cannot find device "nvmf_tgt_br" 00:07:39.590 16:27:15 -- nvmf/common.sh@157 -- # true 00:07:39.590 16:27:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:39.590 Cannot find device "nvmf_tgt_br2" 00:07:39.590 16:27:15 -- nvmf/common.sh@158 -- # true 00:07:39.590 16:27:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:39.590 16:27:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:39.590 16:27:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:39.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:39.590 16:27:15 -- nvmf/common.sh@161 -- # true 00:07:39.590 16:27:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:39.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:39.590 16:27:15 -- nvmf/common.sh@162 -- # true 00:07:39.590 16:27:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:39.590 16:27:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:39.590 16:27:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:39.590 16:27:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:39.590 16:27:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:39.590 16:27:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:39.590 16:27:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:39.590 16:27:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:39.590 16:27:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:39.590 16:27:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:39.590 16:27:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:39.590 16:27:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:39.590 16:27:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:39.590 16:27:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:39.590 16:27:16 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:39.590 16:27:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:39.590 16:27:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:39.590 16:27:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:39.590 16:27:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:39.590 16:27:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:39.590 16:27:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:39.590 16:27:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:39.590 16:27:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:39.590 16:27:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:39.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:07:39.590 00:07:39.590 --- 10.0.0.2 ping statistics --- 00:07:39.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.590 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:39.590 16:27:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:39.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:39.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:07:39.590 00:07:39.590 --- 10.0.0.3 ping statistics --- 00:07:39.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.590 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:39.590 16:27:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:39.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:39.590 00:07:39.590 --- 10.0.0.1 ping statistics --- 00:07:39.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.590 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:39.590 16:27:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.590 16:27:16 -- nvmf/common.sh@421 -- # return 0 00:07:39.590 16:27:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:39.590 16:27:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.590 16:27:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:39.590 16:27:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:39.590 16:27:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.590 16:27:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:39.590 16:27:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:39.590 16:27:16 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:39.590 16:27:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:39.590 16:27:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.590 16:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:39.590 ************************************ 00:07:39.590 START TEST nvmf_filesystem_no_in_capsule 00:07:39.590 ************************************ 00:07:39.590 16:27:16 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:39.590 16:27:16 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:39.590 16:27:16 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:39.590 16:27:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:39.590 16:27:16 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:39.590 16:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:39.590 16:27:16 -- nvmf/common.sh@469 -- # nvmfpid=72563 00:07:39.590 16:27:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.590 16:27:16 -- nvmf/common.sh@470 -- # waitforlisten 72563 00:07:39.590 16:27:16 -- common/autotest_common.sh@829 -- # '[' -z 72563 ']' 00:07:39.590 16:27:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.590 16:27:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.590 16:27:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.590 16:27:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.590 16:27:16 -- common/autotest_common.sh@10 -- # set +x 00:07:39.590 [2024-11-16 16:27:16.262508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.590 [2024-11-16 16:27:16.262586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.590 [2024-11-16 16:27:16.395181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.590 [2024-11-16 16:27:16.471030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.590 [2024-11-16 16:27:16.471222] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.590 [2024-11-16 16:27:16.471238] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.590 [2024-11-16 16:27:16.471247] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
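The common.sh trace above (@144 through @206) amounts to a small veth-plus-bridge test topology; a condensed sketch using the interface names and addresses exactly as they appear in the log (the real nvmf_veth_init also tears down stale devices first and adds a second target interface at 10.0.0.3):

# Condensed topology sketch, assembled from the commands traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # bridge the two host-side ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target sanity check
# The target then runs inside the namespace, as traced at nvmf/common.sh@468:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF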
00:07:39.590 [2024-11-16 16:27:16.471421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.590 [2024-11-16 16:27:16.472117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.590 [2024-11-16 16:27:16.472214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.590 [2024-11-16 16:27:16.472220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.850 16:27:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.850 16:27:17 -- common/autotest_common.sh@862 -- # return 0 00:07:39.850 16:27:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:39.850 16:27:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.850 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:39.850 16:27:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.850 16:27:17 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:39.850 16:27:17 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:39.850 16:27:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.850 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:39.850 [2024-11-16 16:27:17.302864] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.850 16:27:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.850 16:27:17 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:39.850 16:27:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.850 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.108 Malloc1 00:07:40.108 16:27:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.108 16:27:17 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:40.108 16:27:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.108 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.108 16:27:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.108 16:27:17 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:40.108 16:27:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.108 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.108 16:27:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.108 16:27:17 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.108 16:27:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.108 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.108 [2024-11-16 16:27:17.534817] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.108 16:27:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.108 16:27:17 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:40.108 16:27:17 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:40.108 16:27:17 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:40.108 16:27:17 -- common/autotest_common.sh@1369 -- # local bs 00:07:40.108 16:27:17 -- common/autotest_common.sh@1370 -- # local nb 00:07:40.108 16:27:17 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:40.108 16:27:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.108 16:27:17 -- common/autotest_common.sh@10 -- # set +x 00:07:40.108 
16:27:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.108 16:27:17 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:40.108 { 00:07:40.108 "aliases": [ 00:07:40.108 "18f596d4-c524-44db-936c-9a53d1d9be88" 00:07:40.108 ], 00:07:40.108 "assigned_rate_limits": { 00:07:40.108 "r_mbytes_per_sec": 0, 00:07:40.108 "rw_ios_per_sec": 0, 00:07:40.108 "rw_mbytes_per_sec": 0, 00:07:40.108 "w_mbytes_per_sec": 0 00:07:40.108 }, 00:07:40.108 "block_size": 512, 00:07:40.108 "claim_type": "exclusive_write", 00:07:40.108 "claimed": true, 00:07:40.108 "driver_specific": {}, 00:07:40.108 "memory_domains": [ 00:07:40.108 { 00:07:40.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.108 "dma_device_type": 2 00:07:40.108 } 00:07:40.108 ], 00:07:40.108 "name": "Malloc1", 00:07:40.108 "num_blocks": 1048576, 00:07:40.108 "product_name": "Malloc disk", 00:07:40.108 "supported_io_types": { 00:07:40.108 "abort": true, 00:07:40.108 "compare": false, 00:07:40.108 "compare_and_write": false, 00:07:40.108 "flush": true, 00:07:40.108 "nvme_admin": false, 00:07:40.108 "nvme_io": false, 00:07:40.108 "read": true, 00:07:40.108 "reset": true, 00:07:40.108 "unmap": true, 00:07:40.108 "write": true, 00:07:40.108 "write_zeroes": true 00:07:40.108 }, 00:07:40.108 "uuid": "18f596d4-c524-44db-936c-9a53d1d9be88", 00:07:40.108 "zoned": false 00:07:40.108 } 00:07:40.108 ]' 00:07:40.108 16:27:17 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:40.367 16:27:17 -- common/autotest_common.sh@1372 -- # bs=512 00:07:40.367 16:27:17 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:40.367 16:27:17 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:40.367 16:27:17 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:40.367 16:27:17 -- common/autotest_common.sh@1377 -- # echo 512 00:07:40.367 16:27:17 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:40.367 16:27:17 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:40.625 16:27:17 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:40.625 16:27:17 -- common/autotest_common.sh@1187 -- # local i=0 00:07:40.625 16:27:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:40.625 16:27:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:40.625 16:27:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:42.524 16:27:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:42.524 16:27:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:42.524 16:27:19 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.524 16:27:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:42.524 16:27:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.524 16:27:19 -- common/autotest_common.sh@1197 -- # return 0 00:07:42.524 16:27:19 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:42.524 16:27:19 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:42.524 16:27:19 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:42.524 16:27:19 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:42.524 16:27:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:42.524 16:27:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:42.524 16:27:19 -- 
setup/common.sh@80 -- # echo 536870912 00:07:42.524 16:27:19 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:42.524 16:27:19 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:42.524 16:27:19 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:42.524 16:27:19 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:42.524 16:27:19 -- target/filesystem.sh@69 -- # partprobe 00:07:42.783 16:27:20 -- target/filesystem.sh@70 -- # sleep 1 00:07:43.718 16:27:21 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:43.718 16:27:21 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:43.718 16:27:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:43.718 16:27:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.718 16:27:21 -- common/autotest_common.sh@10 -- # set +x 00:07:43.718 ************************************ 00:07:43.718 START TEST filesystem_ext4 00:07:43.718 ************************************ 00:07:43.718 16:27:21 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:43.718 16:27:21 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:43.718 16:27:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.718 16:27:21 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:43.718 16:27:21 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:43.718 16:27:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:43.718 16:27:21 -- common/autotest_common.sh@914 -- # local i=0 00:07:43.718 16:27:21 -- common/autotest_common.sh@915 -- # local force 00:07:43.718 16:27:21 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:43.718 16:27:21 -- common/autotest_common.sh@918 -- # force=-F 00:07:43.718 16:27:21 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:43.718 mke2fs 1.47.0 (5-Feb-2023) 00:07:43.976 Discarding device blocks: 0/522240 done 00:07:43.976 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:43.976 Filesystem UUID: 74710531-e048-4025-8f19-2ce9918a658e 00:07:43.976 Superblock backups stored on blocks: 00:07:43.976 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:43.976 00:07:43.976 Allocating group tables: 0/64 done 00:07:43.976 Writing inode tables: 0/64 done 00:07:43.976 Creating journal (8192 blocks): done 00:07:43.976 Writing superblocks and filesystem accounting information: 0/64 done 00:07:43.976 00:07:43.976 16:27:21 -- common/autotest_common.sh@931 -- # return 0 00:07:43.976 16:27:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.278 16:27:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.278 16:27:26 -- target/filesystem.sh@25 -- # sync 00:07:49.278 16:27:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.278 16:27:26 -- target/filesystem.sh@27 -- # sync 00:07:49.278 16:27:26 -- target/filesystem.sh@29 -- # i=0 00:07:49.278 16:27:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.278 16:27:26 -- target/filesystem.sh@37 -- # kill -0 72563 00:07:49.278 16:27:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.278 16:27:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.278 16:27:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.278 16:27:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.278 00:07:49.278 real 0m5.629s 00:07:49.278 user 0m0.025s 00:07:49.278 sys 0m0.068s 00:07:49.278 
16:27:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.278 16:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:49.278 ************************************ 00:07:49.278 END TEST filesystem_ext4 00:07:49.278 ************************************ 00:07:49.278 16:27:26 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:49.278 16:27:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.278 16:27:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.278 16:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:49.278 ************************************ 00:07:49.278 START TEST filesystem_btrfs 00:07:49.278 ************************************ 00:07:49.278 16:27:26 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:49.278 16:27:26 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:49.278 16:27:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.278 16:27:26 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:49.278 16:27:26 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:49.279 16:27:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:49.279 16:27:26 -- common/autotest_common.sh@914 -- # local i=0 00:07:49.279 16:27:26 -- common/autotest_common.sh@915 -- # local force 00:07:49.279 16:27:26 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:49.279 16:27:26 -- common/autotest_common.sh@920 -- # force=-f 00:07:49.279 16:27:26 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:49.537 btrfs-progs v6.8.1 00:07:49.537 See https://btrfs.readthedocs.io for more information. 00:07:49.537 00:07:49.537 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
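Host-side, the steps between target bring-up and the per-filesystem checks repeat in a fixed pattern; a simplified reconstruction from the filesystem.sh trace (serial-based device lookup as logged; the script's retry loops, run_test wrappers, and xtrace plumbing are omitted):

# Simplified host-side flow, reconstructed from the target/filesystem.sh trace.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 \
    --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007
# Resolve the block device by serial (SPDKISFASTANDAWESOME -> nvme0n1 in this run).
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1
for fstype in ext4 btrfs xfs; do           # each is a separate run_test in the real script
    mkfs.$fstype -f "/dev/${nvme_name}p1"  # ext4 takes -F rather than -f
    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                     # the target must have survived the I/O
done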
00:07:49.537 NOTE: several default settings have changed in version 5.15, please make sure 00:07:49.537 this does not affect your deployments: 00:07:49.537 - DUP for metadata (-m dup) 00:07:49.537 - enabled no-holes (-O no-holes) 00:07:49.537 - enabled free-space-tree (-R free-space-tree) 00:07:49.537 00:07:49.537 Label: (null) 00:07:49.537 UUID: 29370d66-c601-4972-a911-a6af1de0308d 00:07:49.537 Node size: 16384 00:07:49.537 Sector size: 4096 (CPU page size: 4096) 00:07:49.537 Filesystem size: 510.00MiB 00:07:49.537 Block group profiles: 00:07:49.537 Data: single 8.00MiB 00:07:49.537 Metadata: DUP 32.00MiB 00:07:49.537 System: DUP 8.00MiB 00:07:49.537 SSD detected: yes 00:07:49.537 Zoned device: no 00:07:49.537 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:49.537 Checksum: crc32c 00:07:49.537 Number of devices: 1 00:07:49.537 Devices: 00:07:49.537 ID SIZE PATH 00:07:49.537 1 510.00MiB /dev/nvme0n1p1 00:07:49.537 00:07:49.537 16:27:26 -- common/autotest_common.sh@931 -- # return 0 00:07:49.537 16:27:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:49.537 16:27:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:49.537 16:27:27 -- target/filesystem.sh@25 -- # sync 00:07:49.796 16:27:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:49.796 16:27:27 -- target/filesystem.sh@27 -- # sync 00:07:49.796 16:27:27 -- target/filesystem.sh@29 -- # i=0 00:07:49.796 16:27:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:49.796 16:27:27 -- target/filesystem.sh@37 -- # kill -0 72563 00:07:49.796 16:27:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:49.796 16:27:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:49.796 16:27:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:49.796 16:27:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:49.796 00:07:49.796 real 0m0.316s 00:07:49.796 user 0m0.015s 00:07:49.796 sys 0m0.069s 00:07:49.796 16:27:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.796 16:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.796 ************************************ 00:07:49.796 END TEST filesystem_btrfs 00:07:49.796 ************************************ 00:07:49.796 16:27:27 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:49.796 16:27:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:49.796 16:27:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.796 16:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.796 ************************************ 00:07:49.796 START TEST filesystem_xfs 00:07:49.796 ************************************ 00:07:49.796 16:27:27 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:49.796 16:27:27 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:49.796 16:27:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.796 16:27:27 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:49.796 16:27:27 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:49.796 16:27:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:49.796 16:27:27 -- common/autotest_common.sh@914 -- # local i=0 00:07:49.796 16:27:27 -- common/autotest_common.sh@915 -- # local force 00:07:49.796 16:27:27 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:49.796 16:27:27 -- common/autotest_common.sh@920 -- # force=-f 00:07:49.796 16:27:27 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:07:49.796 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:49.796 = sectsz=512 attr=2, projid32bit=1 00:07:49.796 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:49.796 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:49.796 data = bsize=4096 blocks=130560, imaxpct=25 00:07:49.796 = sunit=0 swidth=0 blks 00:07:49.796 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:49.796 log =internal log bsize=4096 blocks=16384, version=2 00:07:49.796 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:49.796 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:50.732 Discarding blocks...Done. 00:07:50.732 16:27:27 -- common/autotest_common.sh@931 -- # return 0 00:07:50.732 16:27:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.264 16:27:30 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.264 16:27:30 -- target/filesystem.sh@25 -- # sync 00:07:53.264 16:27:30 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.264 16:27:30 -- target/filesystem.sh@27 -- # sync 00:07:53.264 16:27:30 -- target/filesystem.sh@29 -- # i=0 00:07:53.264 16:27:30 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.264 16:27:30 -- target/filesystem.sh@37 -- # kill -0 72563 00:07:53.264 16:27:30 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.265 16:27:30 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.265 16:27:30 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.265 16:27:30 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.265 ************************************ 00:07:53.265 END TEST filesystem_xfs 00:07:53.265 ************************************ 00:07:53.265 00:07:53.265 real 0m3.185s 00:07:53.265 user 0m0.024s 00:07:53.265 sys 0m0.063s 00:07:53.265 16:27:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.265 16:27:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.265 16:27:30 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:53.265 16:27:30 -- target/filesystem.sh@93 -- # sync 00:07:53.265 16:27:30 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:53.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.265 16:27:30 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:53.265 16:27:30 -- common/autotest_common.sh@1208 -- # local i=0 00:07:53.265 16:27:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:53.265 16:27:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.265 16:27:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:53.265 16:27:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.265 16:27:30 -- common/autotest_common.sh@1220 -- # return 0 00:07:53.265 16:27:30 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.265 16:27:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.265 16:27:30 -- common/autotest_common.sh@10 -- # set +x 00:07:53.265 16:27:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.265 16:27:30 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:53.265 16:27:30 -- target/filesystem.sh@101 -- # killprocess 72563 00:07:53.265 16:27:30 -- common/autotest_common.sh@936 -- # '[' -z 72563 ']' 00:07:53.265 16:27:30 -- common/autotest_common.sh@940 -- # kill -0 72563 00:07:53.265 16:27:30 -- common/autotest_common.sh@941 -- # uname 00:07:53.265 16:27:30 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.265 16:27:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72563 00:07:53.265 killing process with pid 72563 00:07:53.265 16:27:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.265 16:27:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.265 16:27:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72563' 00:07:53.265 16:27:30 -- common/autotest_common.sh@955 -- # kill 72563 00:07:53.265 16:27:30 -- common/autotest_common.sh@960 -- # wait 72563 00:07:53.832 16:27:31 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:53.832 00:07:53.832 real 0m14.828s 00:07:53.832 user 0m57.225s 00:07:53.832 sys 0m1.696s 00:07:53.832 16:27:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.832 16:27:31 -- common/autotest_common.sh@10 -- # set +x 00:07:53.832 ************************************ 00:07:53.832 END TEST nvmf_filesystem_no_in_capsule 00:07:53.832 ************************************ 00:07:53.832 16:27:31 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:53.832 16:27:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:53.832 16:27:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.832 16:27:31 -- common/autotest_common.sh@10 -- # set +x 00:07:53.832 ************************************ 00:07:53.832 START TEST nvmf_filesystem_in_capsule 00:07:53.832 ************************************ 00:07:53.832 16:27:31 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:07:53.832 16:27:31 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:53.832 16:27:31 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:53.832 16:27:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:53.832 16:27:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.832 16:27:31 -- common/autotest_common.sh@10 -- # set +x 00:07:53.832 16:27:31 -- nvmf/common.sh@469 -- # nvmfpid=72940 00:07:53.832 16:27:31 -- nvmf/common.sh@470 -- # waitforlisten 72940 00:07:53.832 16:27:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.832 16:27:31 -- common/autotest_common.sh@829 -- # '[' -z 72940 ']' 00:07:53.832 16:27:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.832 16:27:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.832 16:27:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.832 16:27:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.832 16:27:31 -- common/autotest_common.sh@10 -- # set +x 00:07:53.832 [2024-11-16 16:27:31.146948] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
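The in-capsule suite that starts here repeats the same provisioning; per the rpc_cmd traces at filesystem.sh@52, the only functional difference between the two runs is the in-capsule data size handed to the TCP transport:

# Transport creation per suite, exactly as traced:
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0       # nvmf_filesystem_no_in_capsule
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # nvmf_filesystem_in_capsule
# The rest of the bring-up is shared between the suites (sketch):
rpc_cmd bdev_malloc_create 512 512 -b Malloc1              # 512 MiB bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With -c 4096, command capsules from the initiator may carry up to 4 KiB of write data inline, so small writes can skip the separate ready-to-transfer round trip that the -c 0 run forces.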
00:07:53.832 [2024-11-16 16:27:31.147649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.832 [2024-11-16 16:27:31.289063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.091 [2024-11-16 16:27:31.364436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.091 [2024-11-16 16:27:31.364591] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.091 [2024-11-16 16:27:31.364606] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.091 [2024-11-16 16:27:31.364614] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.091 [2024-11-16 16:27:31.364695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.091 [2024-11-16 16:27:31.365322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.091 [2024-11-16 16:27:31.365501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.091 [2024-11-16 16:27:31.365523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.025 16:27:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.026 16:27:32 -- common/autotest_common.sh@862 -- # return 0 00:07:55.026 16:27:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:55.026 16:27:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.026 16:27:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.026 16:27:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.026 16:27:32 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:55.026 16:27:32 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:55.026 16:27:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.026 16:27:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.026 [2024-11-16 16:27:32.197497] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.026 16:27:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.026 16:27:32 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:55.026 16:27:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.026 16:27:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.026 Malloc1 00:07:55.026 16:27:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.026 16:27:32 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:55.026 16:27:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.026 16:27:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.026 16:27:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.026 16:27:32 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:55.026 16:27:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.026 16:27:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.026 16:27:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.026 16:27:32 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.026 16:27:32 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.026 16:27:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.026 [2024-11-16 16:27:32.429691] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.026 16:27:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.026 16:27:32 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:55.026 16:27:32 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:55.026 16:27:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:55.026 16:27:32 -- common/autotest_common.sh@1369 -- # local bs 00:07:55.026 16:27:32 -- common/autotest_common.sh@1370 -- # local nb 00:07:55.026 16:27:32 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:55.026 16:27:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.026 16:27:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.026 16:27:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.026 16:27:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:55.026 { 00:07:55.026 "aliases": [ 00:07:55.026 "fdaf3578-7829-4e19-88cc-935143109cf5" 00:07:55.026 ], 00:07:55.026 "assigned_rate_limits": { 00:07:55.026 "r_mbytes_per_sec": 0, 00:07:55.026 "rw_ios_per_sec": 0, 00:07:55.026 "rw_mbytes_per_sec": 0, 00:07:55.026 "w_mbytes_per_sec": 0 00:07:55.026 }, 00:07:55.026 "block_size": 512, 00:07:55.026 "claim_type": "exclusive_write", 00:07:55.026 "claimed": true, 00:07:55.026 "driver_specific": {}, 00:07:55.026 "memory_domains": [ 00:07:55.026 { 00:07:55.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.026 "dma_device_type": 2 00:07:55.026 } 00:07:55.026 ], 00:07:55.026 "name": "Malloc1", 00:07:55.026 "num_blocks": 1048576, 00:07:55.026 "product_name": "Malloc disk", 00:07:55.026 "supported_io_types": { 00:07:55.026 "abort": true, 00:07:55.026 "compare": false, 00:07:55.026 "compare_and_write": false, 00:07:55.026 "flush": true, 00:07:55.026 "nvme_admin": false, 00:07:55.026 "nvme_io": false, 00:07:55.026 "read": true, 00:07:55.026 "reset": true, 00:07:55.026 "unmap": true, 00:07:55.026 "write": true, 00:07:55.026 "write_zeroes": true 00:07:55.026 }, 00:07:55.026 "uuid": "fdaf3578-7829-4e19-88cc-935143109cf5", 00:07:55.026 "zoned": false 00:07:55.026 } 00:07:55.026 ]' 00:07:55.026 16:27:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:55.026 16:27:32 -- common/autotest_common.sh@1372 -- # bs=512 00:07:55.026 16:27:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:55.284 16:27:32 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:55.284 16:27:32 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:55.284 16:27:32 -- common/autotest_common.sh@1377 -- # echo 512 00:07:55.284 16:27:32 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:55.284 16:27:32 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.284 16:27:32 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.284 16:27:32 -- common/autotest_common.sh@1187 -- # local i=0 00:07:55.284 16:27:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.284 16:27:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:55.284 16:27:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:57.819 16:27:34 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:57.819 16:27:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:57.819 16:27:34 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.819 16:27:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:57.819 16:27:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.819 16:27:34 -- common/autotest_common.sh@1197 -- # return 0 00:07:57.819 16:27:34 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:57.819 16:27:34 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:57.819 16:27:34 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:57.819 16:27:34 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:57.819 16:27:34 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:57.819 16:27:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:57.819 16:27:34 -- setup/common.sh@80 -- # echo 536870912 00:07:57.819 16:27:34 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:57.819 16:27:34 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:57.819 16:27:34 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:57.819 16:27:34 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:57.819 16:27:34 -- target/filesystem.sh@69 -- # partprobe 00:07:57.819 16:27:34 -- target/filesystem.sh@70 -- # sleep 1 00:07:58.754 16:27:35 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:58.754 16:27:35 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:58.754 16:27:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:58.754 16:27:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.754 16:27:35 -- common/autotest_common.sh@10 -- # set +x 00:07:58.754 ************************************ 00:07:58.754 START TEST filesystem_in_capsule_ext4 00:07:58.754 ************************************ 00:07:58.754 16:27:35 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:58.754 16:27:35 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:58.754 16:27:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.754 16:27:35 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:58.754 16:27:35 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:58.754 16:27:35 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:58.754 16:27:35 -- common/autotest_common.sh@914 -- # local i=0 00:07:58.754 16:27:35 -- common/autotest_common.sh@915 -- # local force 00:07:58.754 16:27:35 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:58.754 16:27:35 -- common/autotest_common.sh@918 -- # force=-F 00:07:58.754 16:27:35 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:58.754 mke2fs 1.47.0 (5-Feb-2023) 00:07:58.754 Discarding device blocks: 0/522240 done 00:07:58.754 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:58.754 Filesystem UUID: 31894e1b-7ed4-48fd-8b98-dc2faedba26c 00:07:58.754 Superblock backups stored on blocks: 00:07:58.754 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:58.754 00:07:58.754 Allocating group tables: 0/64 done 00:07:58.754 Writing inode tables: 0/64 done 00:07:58.754 Creating journal (8192 blocks): done 00:07:58.754 Writing superblocks and filesystem accounting information: 0/64 done 00:07:58.754 00:07:58.754 16:27:36 
-- common/autotest_common.sh@931 -- # return 0 00:07:58.754 16:27:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.020 16:27:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.020 16:27:41 -- target/filesystem.sh@25 -- # sync 00:08:04.279 16:27:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.279 16:27:41 -- target/filesystem.sh@27 -- # sync 00:08:04.279 16:27:41 -- target/filesystem.sh@29 -- # i=0 00:08:04.279 16:27:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.279 16:27:41 -- target/filesystem.sh@37 -- # kill -0 72940 00:08:04.279 16:27:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.279 16:27:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.279 16:27:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.279 16:27:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.279 00:08:04.279 real 0m5.677s 00:08:04.279 user 0m0.017s 00:08:04.279 sys 0m0.073s 00:08:04.279 16:27:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.279 16:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:04.279 ************************************ 00:08:04.279 END TEST filesystem_in_capsule_ext4 00:08:04.279 ************************************ 00:08:04.279 16:27:41 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:04.279 16:27:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:04.279 16:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.279 16:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:04.279 ************************************ 00:08:04.279 START TEST filesystem_in_capsule_btrfs 00:08:04.279 ************************************ 00:08:04.279 16:27:41 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:04.279 16:27:41 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:04.279 16:27:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.279 16:27:41 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:04.279 16:27:41 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:04.279 16:27:41 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:04.279 16:27:41 -- common/autotest_common.sh@914 -- # local i=0 00:08:04.279 16:27:41 -- common/autotest_common.sh@915 -- # local force 00:08:04.279 16:27:41 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:04.279 16:27:41 -- common/autotest_common.sh@920 -- # force=-f 00:08:04.279 16:27:41 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:04.536 btrfs-progs v6.8.1 00:08:04.536 See https://btrfs.readthedocs.io for more information. 00:08:04.536 00:08:04.536 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
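Each suite ends with the same teardown, visible at filesystem.sh@91-101 in the trace (once above for the no-in-capsule run, again below for this one); summarized as a sketch, with waitforserial_disconnect's lsblk polling and the trap handling left out:

# Suite teardown, as traced at target/filesystem.sh@91-101.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the test partition under lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # host detaches before the target goes away
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
killprocess "$nvmfpid"                            # kill + wait on the nvmf_tgt pid
# nvmftestfini later unloads the kernel modules, as echoed near the end of the run:
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics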
00:08:04.536 NOTE: several default settings have changed in version 5.15, please make sure 00:08:04.536 this does not affect your deployments: 00:08:04.536 - DUP for metadata (-m dup) 00:08:04.536 - enabled no-holes (-O no-holes) 00:08:04.536 - enabled free-space-tree (-R free-space-tree) 00:08:04.536 00:08:04.536 Label: (null) 00:08:04.536 UUID: 27222aed-602c-475c-975f-554ae994cae9 00:08:04.536 Node size: 16384 00:08:04.536 Sector size: 4096 (CPU page size: 4096) 00:08:04.536 Filesystem size: 510.00MiB 00:08:04.536 Block group profiles: 00:08:04.536 Data: single 8.00MiB 00:08:04.536 Metadata: DUP 32.00MiB 00:08:04.536 System: DUP 8.00MiB 00:08:04.536 SSD detected: yes 00:08:04.536 Zoned device: no 00:08:04.536 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:04.536 Checksum: crc32c 00:08:04.536 Number of devices: 1 00:08:04.536 Devices: 00:08:04.536 ID SIZE PATH 00:08:04.536 1 510.00MiB /dev/nvme0n1p1 00:08:04.536 00:08:04.536 16:27:41 -- common/autotest_common.sh@931 -- # return 0 00:08:04.537 16:27:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.537 16:27:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.537 16:27:41 -- target/filesystem.sh@25 -- # sync 00:08:04.537 16:27:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.537 16:27:41 -- target/filesystem.sh@27 -- # sync 00:08:04.537 16:27:41 -- target/filesystem.sh@29 -- # i=0 00:08:04.537 16:27:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.537 16:27:41 -- target/filesystem.sh@37 -- # kill -0 72940 00:08:04.537 16:27:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.537 16:27:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.537 16:27:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.537 16:27:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.537 00:08:04.537 real 0m0.254s 00:08:04.537 user 0m0.020s 00:08:04.537 sys 0m0.065s 00:08:04.537 16:27:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.537 16:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:04.537 ************************************ 00:08:04.537 END TEST filesystem_in_capsule_btrfs 00:08:04.537 ************************************ 00:08:04.537 16:27:41 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:04.537 16:27:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:04.537 16:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.537 16:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:04.537 ************************************ 00:08:04.537 START TEST filesystem_in_capsule_xfs 00:08:04.537 ************************************ 00:08:04.537 16:27:41 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:04.537 16:27:41 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:04.537 16:27:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.537 16:27:41 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:04.537 16:27:41 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:04.537 16:27:41 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:04.537 16:27:41 -- common/autotest_common.sh@914 -- # local i=0 00:08:04.537 16:27:41 -- common/autotest_common.sh@915 -- # local force 00:08:04.537 16:27:41 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:04.537 16:27:41 -- common/autotest_common.sh@920 -- # force=-f 00:08:04.537 16:27:41 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:04.537 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:04.537 = sectsz=512 attr=2, projid32bit=1 00:08:04.537 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:04.537 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:04.537 data = bsize=4096 blocks=130560, imaxpct=25 00:08:04.537 = sunit=0 swidth=0 blks 00:08:04.537 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:04.537 log =internal log bsize=4096 blocks=16384, version=2 00:08:04.537 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:04.537 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:05.472 Discarding blocks...Done. 00:08:05.472 16:27:42 -- common/autotest_common.sh@931 -- # return 0 00:08:05.472 16:27:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.372 16:27:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.372 16:27:44 -- target/filesystem.sh@25 -- # sync 00:08:07.372 16:27:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.372 16:27:44 -- target/filesystem.sh@27 -- # sync 00:08:07.372 16:27:44 -- target/filesystem.sh@29 -- # i=0 00:08:07.372 16:27:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.372 16:27:44 -- target/filesystem.sh@37 -- # kill -0 72940 00:08:07.372 16:27:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.372 16:27:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.372 16:27:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.372 16:27:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.372 00:08:07.372 real 0m2.618s 00:08:07.372 user 0m0.011s 00:08:07.372 sys 0m0.069s 00:08:07.372 16:27:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.372 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:07.372 ************************************ 00:08:07.372 END TEST filesystem_in_capsule_xfs 00:08:07.372 ************************************ 00:08:07.372 16:27:44 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:07.372 16:27:44 -- target/filesystem.sh@93 -- # sync 00:08:07.372 16:27:44 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.372 16:27:44 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.372 16:27:44 -- common/autotest_common.sh@1208 -- # local i=0 00:08:07.372 16:27:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:07.372 16:27:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.372 16:27:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:07.372 16:27:44 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.372 16:27:44 -- common/autotest_common.sh@1220 -- # return 0 00:08:07.372 16:27:44 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.372 16:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.372 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:07.372 16:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.372 16:27:44 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:07.372 16:27:44 -- target/filesystem.sh@101 -- # killprocess 72940 00:08:07.372 16:27:44 -- common/autotest_common.sh@936 -- # '[' -z 72940 ']' 00:08:07.372 16:27:44 -- common/autotest_common.sh@940 -- # kill -0 72940 00:08:07.372 16:27:44 -- 
common/autotest_common.sh@941 -- # uname 00:08:07.372 16:27:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:07.372 16:27:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72940 00:08:07.372 16:27:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:07.372 16:27:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:07.372 16:27:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72940' 00:08:07.372 killing process with pid 72940 00:08:07.372 16:27:44 -- common/autotest_common.sh@955 -- # kill 72940 00:08:07.372 16:27:44 -- common/autotest_common.sh@960 -- # wait 72940 00:08:07.966 16:27:45 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:07.966 00:08:07.966 real 0m14.189s 00:08:07.966 user 0m54.740s 00:08:07.966 sys 0m1.618s 00:08:07.966 16:27:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.966 16:27:45 -- common/autotest_common.sh@10 -- # set +x 00:08:07.966 ************************************ 00:08:07.966 END TEST nvmf_filesystem_in_capsule 00:08:07.966 ************************************ 00:08:07.966 16:27:45 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:07.966 16:27:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:07.966 16:27:45 -- nvmf/common.sh@116 -- # sync 00:08:07.966 16:27:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:07.966 16:27:45 -- nvmf/common.sh@119 -- # set +e 00:08:07.966 16:27:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:07.966 16:27:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:07.966 rmmod nvme_tcp 00:08:07.966 rmmod nvme_fabrics 00:08:07.966 rmmod nvme_keyring 00:08:07.966 16:27:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:07.966 16:27:45 -- nvmf/common.sh@123 -- # set -e 00:08:07.966 16:27:45 -- nvmf/common.sh@124 -- # return 0 00:08:07.966 16:27:45 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:07.966 16:27:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:07.966 16:27:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:07.966 16:27:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:07.966 16:27:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.966 16:27:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:07.966 16:27:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.966 16:27:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.966 16:27:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.225 16:27:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:08.225 00:08:08.225 real 0m29.988s 00:08:08.225 user 1m52.276s 00:08:08.225 sys 0m3.794s 00:08:08.225 16:27:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.225 16:27:45 -- common/autotest_common.sh@10 -- # set +x 00:08:08.225 ************************************ 00:08:08.225 END TEST nvmf_filesystem 00:08:08.225 ************************************ 00:08:08.225 16:27:45 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:08.225 16:27:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:08.225 16:27:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.225 16:27:45 -- common/autotest_common.sh@10 -- # set +x 00:08:08.225 ************************************ 00:08:08.225 START TEST nvmf_discovery 00:08:08.225 ************************************ 00:08:08.225 16:27:45 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:08.225 * Looking for test storage... 00:08:08.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.225 16:27:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.225 16:27:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.225 16:27:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.225 16:27:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.225 16:27:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.225 16:27:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.225 16:27:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.225 16:27:45 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.225 16:27:45 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.225 16:27:45 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.225 16:27:45 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.225 16:27:45 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.225 16:27:45 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.225 16:27:45 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.225 16:27:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.225 16:27:45 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.225 16:27:45 -- scripts/common.sh@344 -- # : 1 00:08:08.225 16:27:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.225 16:27:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.225 16:27:45 -- scripts/common.sh@364 -- # decimal 1 00:08:08.225 16:27:45 -- scripts/common.sh@352 -- # local d=1 00:08:08.225 16:27:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.225 16:27:45 -- scripts/common.sh@354 -- # echo 1 00:08:08.225 16:27:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.225 16:27:45 -- scripts/common.sh@365 -- # decimal 2 00:08:08.225 16:27:45 -- scripts/common.sh@352 -- # local d=2 00:08:08.225 16:27:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.225 16:27:45 -- scripts/common.sh@354 -- # echo 2 00:08:08.225 16:27:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.225 16:27:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.225 16:27:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.225 16:27:45 -- scripts/common.sh@367 -- # return 0 00:08:08.225 16:27:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.225 16:27:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.225 --rc genhtml_branch_coverage=1 00:08:08.225 --rc genhtml_function_coverage=1 00:08:08.226 --rc genhtml_legend=1 00:08:08.226 --rc geninfo_all_blocks=1 00:08:08.226 --rc geninfo_unexecuted_blocks=1 00:08:08.226 00:08:08.226 ' 00:08:08.226 16:27:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.226 --rc genhtml_branch_coverage=1 00:08:08.226 --rc genhtml_function_coverage=1 00:08:08.226 --rc genhtml_legend=1 00:08:08.226 --rc geninfo_all_blocks=1 00:08:08.226 --rc geninfo_unexecuted_blocks=1 00:08:08.226 00:08:08.226 ' 00:08:08.226 16:27:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.226 --rc genhtml_branch_coverage=1 00:08:08.226 --rc genhtml_function_coverage=1 00:08:08.226 --rc genhtml_legend=1 00:08:08.226 
--rc geninfo_all_blocks=1 00:08:08.226 --rc geninfo_unexecuted_blocks=1 00:08:08.226 00:08:08.226 ' 00:08:08.226 16:27:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.226 --rc genhtml_branch_coverage=1 00:08:08.226 --rc genhtml_function_coverage=1 00:08:08.226 --rc genhtml_legend=1 00:08:08.226 --rc geninfo_all_blocks=1 00:08:08.226 --rc geninfo_unexecuted_blocks=1 00:08:08.226 00:08:08.226 ' 00:08:08.226 16:27:45 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.226 16:27:45 -- nvmf/common.sh@7 -- # uname -s 00:08:08.226 16:27:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.226 16:27:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.226 16:27:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.226 16:27:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.226 16:27:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.226 16:27:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.226 16:27:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.226 16:27:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.226 16:27:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.226 16:27:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.226 16:27:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:08:08.226 16:27:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:08:08.226 16:27:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.226 16:27:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.226 16:27:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.226 16:27:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.226 16:27:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.226 16:27:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.226 16:27:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.226 16:27:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.226 16:27:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.226 16:27:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.226 16:27:45 -- paths/export.sh@5 -- # export PATH 00:08:08.226 16:27:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.226 16:27:45 -- nvmf/common.sh@46 -- # : 0 00:08:08.226 16:27:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:08.226 16:27:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:08.226 16:27:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:08.226 16:27:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.226 16:27:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.226 16:27:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:08.226 16:27:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:08.226 16:27:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:08.226 16:27:45 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:08.226 16:27:45 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:08.226 16:27:45 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:08.226 16:27:45 -- target/discovery.sh@15 -- # hash nvme 00:08:08.226 16:27:45 -- target/discovery.sh@20 -- # nvmftestinit 00:08:08.226 16:27:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:08.226 16:27:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.226 16:27:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:08.226 16:27:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:08.226 16:27:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:08.226 16:27:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.226 16:27:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.226 16:27:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.226 16:27:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:08.226 16:27:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:08.226 16:27:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:08.226 16:27:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:08.226 16:27:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:08.226 16:27:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:08.226 16:27:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.226 16:27:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.226 16:27:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:08.226 16:27:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:08.226 16:27:45 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.226 16:27:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.226 16:27:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.226 16:27:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.226 16:27:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.226 16:27:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.226 16:27:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.226 16:27:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.226 16:27:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:08.485 16:27:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:08.485 Cannot find device "nvmf_tgt_br" 00:08:08.485 16:27:45 -- nvmf/common.sh@154 -- # true 00:08:08.485 16:27:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.485 Cannot find device "nvmf_tgt_br2" 00:08:08.485 16:27:45 -- nvmf/common.sh@155 -- # true 00:08:08.485 16:27:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:08.485 16:27:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:08.485 Cannot find device "nvmf_tgt_br" 00:08:08.485 16:27:45 -- nvmf/common.sh@157 -- # true 00:08:08.485 16:27:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:08.485 Cannot find device "nvmf_tgt_br2" 00:08:08.485 16:27:45 -- nvmf/common.sh@158 -- # true 00:08:08.485 16:27:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:08.485 16:27:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:08.485 16:27:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.485 16:27:45 -- nvmf/common.sh@161 -- # true 00:08:08.485 16:27:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.485 16:27:45 -- nvmf/common.sh@162 -- # true 00:08:08.485 16:27:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:08.485 16:27:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:08.485 16:27:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:08.485 16:27:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:08.485 16:27:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:08.485 16:27:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:08.485 16:27:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:08.485 16:27:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:08.485 16:27:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:08.485 16:27:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:08.485 16:27:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:08.485 16:27:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:08.485 16:27:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:08.485 16:27:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:08.485 16:27:45 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:08.485 16:27:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:08.485 16:27:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:08.485 16:27:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:08.485 16:27:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:08.744 16:27:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:08.744 16:27:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:08.744 16:27:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:08.744 16:27:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:08.744 16:27:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:08.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:08:08.744 00:08:08.744 --- 10.0.0.2 ping statistics --- 00:08:08.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.744 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:08.744 16:27:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:08.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:08.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:08:08.744 00:08:08.744 --- 10.0.0.3 ping statistics --- 00:08:08.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.744 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:08.744 16:27:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:08.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:08.744 00:08:08.744 --- 10.0.0.1 ping statistics --- 00:08:08.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.744 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:08.744 16:27:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.744 16:27:46 -- nvmf/common.sh@421 -- # return 0 00:08:08.744 16:27:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:08.744 16:27:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.744 16:27:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:08.744 16:27:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:08.744 16:27:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.744 16:27:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:08.744 16:27:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:08.744 16:27:46 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:08.744 16:27:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:08.744 16:27:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.744 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:08:08.744 16:27:46 -- nvmf/common.sh@469 -- # nvmfpid=73488 00:08:08.744 16:27:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.744 16:27:46 -- nvmf/common.sh@470 -- # waitforlisten 73488 00:08:08.744 16:27:46 -- common/autotest_common.sh@829 -- # '[' -z 73488 ']' 00:08:08.744 16:27:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.744 16:27:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.744 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.744 16:27:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.744 16:27:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.744 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:08:08.744 [2024-11-16 16:27:46.103524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.744 [2024-11-16 16:27:46.103936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.003 [2024-11-16 16:27:46.241630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.003 [2024-11-16 16:27:46.315169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:09.003 [2024-11-16 16:27:46.315356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.003 [2024-11-16 16:27:46.315374] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.003 [2024-11-16 16:27:46.315386] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.003 [2024-11-16 16:27:46.315552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.003 [2024-11-16 16:27:46.316287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.003 [2024-11-16 16:27:46.316441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.003 [2024-11-16 16:27:46.316451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.940 16:27:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.940 16:27:47 -- common/autotest_common.sh@862 -- # return 0 00:08:09.940 16:27:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:09.940 16:27:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:09.940 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.940 16:27:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.940 16:27:47 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.940 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.940 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.940 [2024-11-16 16:27:47.204047] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.940 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.940 16:27:47 -- target/discovery.sh@26 -- # seq 1 4 00:08:09.940 16:27:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.940 16:27:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:09.940 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.940 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.940 Null1 00:08:09.940 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.940 16:27:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:09.940 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.940 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.940 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
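The bdev_null_create / nvmf_create_subsystem pair traced above is the first pass of a loop that repeats for Null1 through Null4, each followed by an add_ns and add_listener call as seen in the records that follow. Condensed, the sequence amounts to the sketch below. Note rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; the direct rpc.py invocation and the default /var/tmp/spdk.sock socket are assumptions, while the NQNs, serials, size arguments, and listener address are taken verbatim from the log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed direct invocation; the trace goes through rpc_cmd
    for i in 1 2 3 4; do
        "$rpc" bdev_null_create "Null$i" 102400 512                          # size/block-size values from the trace
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"                                      # allow any host, fixed serial number
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"  # attach the null bdev as a namespace
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420                                       # TCP listener on the target-side address
    done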
00:08:09.940 16:27:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:09.940 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.940 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.940 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.940 16:27:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.940 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.940 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.940 [2024-11-16 16:27:47.261205] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.940 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.940 16:27:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.940 16:27:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:09.940 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.940 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.940 Null2 00:08:09.940 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.940 16:27:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:09.940 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.940 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.940 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.940 16:27:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:09.940 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.940 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.940 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.941 16:27:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 Null3 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.941 16:27:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 Null4 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:09.941 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.941 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:09.941 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.941 16:27:47 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 4420 00:08:10.200 00:08:10.200 Discovery Log Number of Records 6, Generation counter 6 00:08:10.200 =====Discovery Log Entry 0====== 00:08:10.200 trtype: tcp 00:08:10.200 adrfam: ipv4 00:08:10.200 subtype: current discovery subsystem 00:08:10.200 treq: not required 00:08:10.200 portid: 0 00:08:10.200 trsvcid: 4420 00:08:10.200 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:10.200 traddr: 10.0.0.2 00:08:10.200 eflags: explicit discovery connections, duplicate discovery information 00:08:10.200 sectype: none 00:08:10.200 =====Discovery Log Entry 1====== 00:08:10.200 trtype: tcp 00:08:10.200 adrfam: ipv4 00:08:10.200 subtype: nvme subsystem 00:08:10.200 treq: not required 00:08:10.200 portid: 0 00:08:10.200 trsvcid: 4420 00:08:10.200 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:10.200 traddr: 10.0.0.2 00:08:10.200 eflags: none 00:08:10.200 sectype: none 00:08:10.200 =====Discovery Log Entry 2====== 00:08:10.200 trtype: tcp 00:08:10.200 adrfam: ipv4 00:08:10.200 subtype: nvme subsystem 00:08:10.200 treq: not required 00:08:10.200 portid: 0 00:08:10.200 trsvcid: 4420 
00:08:10.200 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:10.200 traddr: 10.0.0.2 00:08:10.200 eflags: none 00:08:10.200 sectype: none 00:08:10.200 =====Discovery Log Entry 3====== 00:08:10.200 trtype: tcp 00:08:10.200 adrfam: ipv4 00:08:10.200 subtype: nvme subsystem 00:08:10.200 treq: not required 00:08:10.200 portid: 0 00:08:10.200 trsvcid: 4420 00:08:10.200 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:10.200 traddr: 10.0.0.2 00:08:10.200 eflags: none 00:08:10.200 sectype: none 00:08:10.200 =====Discovery Log Entry 4====== 00:08:10.200 trtype: tcp 00:08:10.200 adrfam: ipv4 00:08:10.200 subtype: nvme subsystem 00:08:10.200 treq: not required 00:08:10.200 portid: 0 00:08:10.200 trsvcid: 4420 00:08:10.200 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:10.200 traddr: 10.0.0.2 00:08:10.200 eflags: none 00:08:10.200 sectype: none 00:08:10.200 =====Discovery Log Entry 5====== 00:08:10.200 trtype: tcp 00:08:10.200 adrfam: ipv4 00:08:10.200 subtype: discovery subsystem referral 00:08:10.200 treq: not required 00:08:10.200 portid: 0 00:08:10.200 trsvcid: 4430 00:08:10.200 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:10.200 traddr: 10.0.0.2 00:08:10.200 eflags: none 00:08:10.200 sectype: none 00:08:10.200 Perform nvmf subsystem discovery via RPC 00:08:10.200 16:27:47 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:10.200 16:27:47 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:10.200 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.200 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 [2024-11-16 16:27:47.493229] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:10.201 [ 00:08:10.201 { 00:08:10.201 "allow_any_host": true, 00:08:10.201 "hosts": [], 00:08:10.201 "listen_addresses": [ 00:08:10.201 { 00:08:10.201 "adrfam": "IPv4", 00:08:10.201 "traddr": "10.0.0.2", 00:08:10.201 "transport": "TCP", 00:08:10.201 "trsvcid": "4420", 00:08:10.201 "trtype": "TCP" 00:08:10.201 } 00:08:10.201 ], 00:08:10.201 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:10.201 "subtype": "Discovery" 00:08:10.201 }, 00:08:10.201 { 00:08:10.201 "allow_any_host": true, 00:08:10.201 "hosts": [], 00:08:10.201 "listen_addresses": [ 00:08:10.201 { 00:08:10.201 "adrfam": "IPv4", 00:08:10.201 "traddr": "10.0.0.2", 00:08:10.201 "transport": "TCP", 00:08:10.201 "trsvcid": "4420", 00:08:10.201 "trtype": "TCP" 00:08:10.201 } 00:08:10.201 ], 00:08:10.201 "max_cntlid": 65519, 00:08:10.201 "max_namespaces": 32, 00:08:10.201 "min_cntlid": 1, 00:08:10.201 "model_number": "SPDK bdev Controller", 00:08:10.201 "namespaces": [ 00:08:10.201 { 00:08:10.201 "bdev_name": "Null1", 00:08:10.201 "name": "Null1", 00:08:10.201 "nguid": "FFBB767BD5694E99A5C6040CE354817F", 00:08:10.201 "nsid": 1, 00:08:10.201 "uuid": "ffbb767b-d569-4e99-a5c6-040ce354817f" 00:08:10.201 } 00:08:10.201 ], 00:08:10.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:10.201 "serial_number": "SPDK00000000000001", 00:08:10.201 "subtype": "NVMe" 00:08:10.201 }, 00:08:10.201 { 00:08:10.201 "allow_any_host": true, 00:08:10.201 "hosts": [], 00:08:10.201 "listen_addresses": [ 00:08:10.201 { 00:08:10.201 "adrfam": "IPv4", 00:08:10.201 "traddr": "10.0.0.2", 00:08:10.201 "transport": "TCP", 00:08:10.201 "trsvcid": "4420", 00:08:10.201 "trtype": "TCP" 00:08:10.201 } 00:08:10.201 ], 00:08:10.201 "max_cntlid": 65519, 00:08:10.201 "max_namespaces": 32, 00:08:10.201 "min_cntlid": 1, 
00:08:10.201 "model_number": "SPDK bdev Controller", 00:08:10.201 "namespaces": [ 00:08:10.201 { 00:08:10.201 "bdev_name": "Null2", 00:08:10.201 "name": "Null2", 00:08:10.201 "nguid": "4DA2F8890FAA42CC97BE8DADF758A438", 00:08:10.201 "nsid": 1, 00:08:10.201 "uuid": "4da2f889-0faa-42cc-97be-8dadf758a438" 00:08:10.201 } 00:08:10.201 ], 00:08:10.201 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:10.201 "serial_number": "SPDK00000000000002", 00:08:10.201 "subtype": "NVMe" 00:08:10.201 }, 00:08:10.201 { 00:08:10.201 "allow_any_host": true, 00:08:10.201 "hosts": [], 00:08:10.201 "listen_addresses": [ 00:08:10.201 { 00:08:10.201 "adrfam": "IPv4", 00:08:10.201 "traddr": "10.0.0.2", 00:08:10.201 "transport": "TCP", 00:08:10.201 "trsvcid": "4420", 00:08:10.201 "trtype": "TCP" 00:08:10.201 } 00:08:10.201 ], 00:08:10.201 "max_cntlid": 65519, 00:08:10.201 "max_namespaces": 32, 00:08:10.201 "min_cntlid": 1, 00:08:10.201 "model_number": "SPDK bdev Controller", 00:08:10.201 "namespaces": [ 00:08:10.201 { 00:08:10.201 "bdev_name": "Null3", 00:08:10.201 "name": "Null3", 00:08:10.201 "nguid": "9491D011D1974633AB7753BDEE5C53A4", 00:08:10.201 "nsid": 1, 00:08:10.201 "uuid": "9491d011-d197-4633-ab77-53bdee5c53a4" 00:08:10.201 } 00:08:10.201 ], 00:08:10.201 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:10.201 "serial_number": "SPDK00000000000003", 00:08:10.201 "subtype": "NVMe" 00:08:10.201 }, 00:08:10.201 { 00:08:10.201 "allow_any_host": true, 00:08:10.201 "hosts": [], 00:08:10.201 "listen_addresses": [ 00:08:10.201 { 00:08:10.201 "adrfam": "IPv4", 00:08:10.201 "traddr": "10.0.0.2", 00:08:10.201 "transport": "TCP", 00:08:10.201 "trsvcid": "4420", 00:08:10.201 "trtype": "TCP" 00:08:10.201 } 00:08:10.201 ], 00:08:10.201 "max_cntlid": 65519, 00:08:10.201 "max_namespaces": 32, 00:08:10.201 "min_cntlid": 1, 00:08:10.201 "model_number": "SPDK bdev Controller", 00:08:10.201 "namespaces": [ 00:08:10.201 { 00:08:10.201 "bdev_name": "Null4", 00:08:10.201 "name": "Null4", 00:08:10.201 "nguid": "321B4ECC55C94FC1809F32855209D277", 00:08:10.201 "nsid": 1, 00:08:10.201 "uuid": "321b4ecc-55c9-4fc1-809f-32855209d277" 00:08:10.201 } 00:08:10.201 ], 00:08:10.201 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:10.201 "serial_number": "SPDK00000000000004", 00:08:10.201 "subtype": "NVMe" 00:08:10.201 } 00:08:10.201 ] 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@42 -- # seq 1 4 00:08:10.201 16:27:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.201 16:27:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.201 16:27:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.201 16:27:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:10.201 16:27:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:10.201 16:27:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.201 16:27:47 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:10.201 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 16:27:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.201 16:27:47 -- target/discovery.sh@49 -- # check_bdevs= 00:08:10.201 16:27:47 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:10.201 16:27:47 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:10.201 16:27:47 -- target/discovery.sh@57 -- # nvmftestfini 00:08:10.201 16:27:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:10.201 16:27:47 -- nvmf/common.sh@116 -- # sync 00:08:10.201 16:27:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:10.201 16:27:47 -- nvmf/common.sh@119 -- # set +e 00:08:10.201 16:27:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:10.201 16:27:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:10.201 rmmod nvme_tcp 00:08:10.460 rmmod nvme_fabrics 00:08:10.460 rmmod nvme_keyring 00:08:10.460 16:27:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:10.460 16:27:47 -- nvmf/common.sh@123 -- # set -e 00:08:10.460 16:27:47 -- nvmf/common.sh@124 -- # return 0 00:08:10.460 16:27:47 -- nvmf/common.sh@477 -- # '[' -n 73488 ']' 00:08:10.460 16:27:47 -- nvmf/common.sh@478 -- # killprocess 73488 00:08:10.460 16:27:47 -- common/autotest_common.sh@936 -- # '[' -z 73488 ']' 00:08:10.460 16:27:47 -- 
common/autotest_common.sh@940 -- # kill -0 73488 00:08:10.460 16:27:47 -- common/autotest_common.sh@941 -- # uname 00:08:10.460 16:27:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:10.460 16:27:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73488 00:08:10.460 killing process with pid 73488 00:08:10.460 16:27:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:10.460 16:27:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:10.460 16:27:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73488' 00:08:10.460 16:27:47 -- common/autotest_common.sh@955 -- # kill 73488 00:08:10.460 [2024-11-16 16:27:47.788405] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:10.460 16:27:47 -- common/autotest_common.sh@960 -- # wait 73488 00:08:10.719 16:27:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:10.719 16:27:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:10.719 16:27:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:10.719 16:27:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.719 16:27:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:10.719 16:27:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.719 16:27:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.719 16:27:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.719 16:27:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:10.719 00:08:10.719 real 0m2.593s 00:08:10.719 user 0m7.276s 00:08:10.719 sys 0m0.658s 00:08:10.719 ************************************ 00:08:10.719 END TEST nvmf_discovery 00:08:10.719 ************************************ 00:08:10.719 16:27:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.719 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:10.719 16:27:48 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.719 16:27:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:10.719 16:27:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.719 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:10.719 ************************************ 00:08:10.719 START TEST nvmf_referrals 00:08:10.719 ************************************ 00:08:10.719 16:27:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:10.978 * Looking for test storage... 
00:08:10.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:10.978 16:27:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:10.978 16:27:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:10.978 16:27:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:10.978 16:27:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:10.978 16:27:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:10.978 16:27:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:10.978 16:27:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:10.978 16:27:48 -- scripts/common.sh@335 -- # IFS=.-: 00:08:10.978 16:27:48 -- scripts/common.sh@335 -- # read -ra ver1 00:08:10.978 16:27:48 -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.978 16:27:48 -- scripts/common.sh@336 -- # read -ra ver2 00:08:10.978 16:27:48 -- scripts/common.sh@337 -- # local 'op=<' 00:08:10.978 16:27:48 -- scripts/common.sh@339 -- # ver1_l=2 00:08:10.978 16:27:48 -- scripts/common.sh@340 -- # ver2_l=1 00:08:10.978 16:27:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:10.978 16:27:48 -- scripts/common.sh@343 -- # case "$op" in 00:08:10.978 16:27:48 -- scripts/common.sh@344 -- # : 1 00:08:10.978 16:27:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:10.978 16:27:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:10.978 16:27:48 -- scripts/common.sh@364 -- # decimal 1 00:08:10.978 16:27:48 -- scripts/common.sh@352 -- # local d=1 00:08:10.978 16:27:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.978 16:27:48 -- scripts/common.sh@354 -- # echo 1 00:08:10.978 16:27:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:10.978 16:27:48 -- scripts/common.sh@365 -- # decimal 2 00:08:10.978 16:27:48 -- scripts/common.sh@352 -- # local d=2 00:08:10.978 16:27:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.978 16:27:48 -- scripts/common.sh@354 -- # echo 2 00:08:10.978 16:27:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:10.978 16:27:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:10.978 16:27:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:10.979 16:27:48 -- scripts/common.sh@367 -- # return 0 00:08:10.979 16:27:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.979 16:27:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:10.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.979 --rc genhtml_branch_coverage=1 00:08:10.979 --rc genhtml_function_coverage=1 00:08:10.979 --rc genhtml_legend=1 00:08:10.979 --rc geninfo_all_blocks=1 00:08:10.979 --rc geninfo_unexecuted_blocks=1 00:08:10.979 00:08:10.979 ' 00:08:10.979 16:27:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:10.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.979 --rc genhtml_branch_coverage=1 00:08:10.979 --rc genhtml_function_coverage=1 00:08:10.979 --rc genhtml_legend=1 00:08:10.979 --rc geninfo_all_blocks=1 00:08:10.979 --rc geninfo_unexecuted_blocks=1 00:08:10.979 00:08:10.979 ' 00:08:10.979 16:27:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:10.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.979 --rc genhtml_branch_coverage=1 00:08:10.979 --rc genhtml_function_coverage=1 00:08:10.979 --rc genhtml_legend=1 00:08:10.979 --rc geninfo_all_blocks=1 00:08:10.979 --rc geninfo_unexecuted_blocks=1 00:08:10.979 00:08:10.979 ' 00:08:10.979 
16:27:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:10.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.979 --rc genhtml_branch_coverage=1 00:08:10.979 --rc genhtml_function_coverage=1 00:08:10.979 --rc genhtml_legend=1 00:08:10.979 --rc geninfo_all_blocks=1 00:08:10.979 --rc geninfo_unexecuted_blocks=1 00:08:10.979 00:08:10.979 ' 00:08:10.979 16:27:48 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:10.979 16:27:48 -- nvmf/common.sh@7 -- # uname -s 00:08:10.979 16:27:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.979 16:27:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.979 16:27:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.979 16:27:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.979 16:27:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.979 16:27:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.979 16:27:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.979 16:27:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.979 16:27:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.979 16:27:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.979 16:27:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:08:10.979 16:27:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:08:10.979 16:27:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.979 16:27:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.979 16:27:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:10.979 16:27:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.979 16:27:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.979 16:27:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.979 16:27:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.979 16:27:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.979 16:27:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.979 16:27:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.979 16:27:48 -- paths/export.sh@5 -- # export PATH 00:08:10.979 16:27:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.979 16:27:48 -- nvmf/common.sh@46 -- # : 0 00:08:10.979 16:27:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:10.979 16:27:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:10.979 16:27:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:10.979 16:27:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.979 16:27:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.979 16:27:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:10.979 16:27:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:10.979 16:27:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:10.979 16:27:48 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:10.979 16:27:48 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:10.979 16:27:48 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:10.979 16:27:48 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:10.979 16:27:48 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:10.979 16:27:48 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:10.979 16:27:48 -- target/referrals.sh@37 -- # nvmftestinit 00:08:10.979 16:27:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:10.979 16:27:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.979 16:27:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:10.979 16:27:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:10.979 16:27:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:10.979 16:27:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.979 16:27:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.979 16:27:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.979 16:27:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:10.979 16:27:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:10.979 16:27:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:10.979 16:27:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:10.979 16:27:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:10.979 16:27:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:10.979 16:27:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.979 16:27:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:08:10.979 16:27:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:10.979 16:27:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:10.979 16:27:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:10.979 16:27:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:10.979 16:27:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:10.979 16:27:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.979 16:27:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:10.979 16:27:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:10.979 16:27:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:10.979 16:27:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:10.979 16:27:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:10.979 16:27:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:10.979 Cannot find device "nvmf_tgt_br" 00:08:10.979 16:27:48 -- nvmf/common.sh@154 -- # true 00:08:10.979 16:27:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:10.979 Cannot find device "nvmf_tgt_br2" 00:08:10.979 16:27:48 -- nvmf/common.sh@155 -- # true 00:08:10.979 16:27:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:10.979 16:27:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:10.979 Cannot find device "nvmf_tgt_br" 00:08:10.979 16:27:48 -- nvmf/common.sh@157 -- # true 00:08:10.979 16:27:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:10.979 Cannot find device "nvmf_tgt_br2" 00:08:10.979 16:27:48 -- nvmf/common.sh@158 -- # true 00:08:10.979 16:27:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:11.238 16:27:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:11.238 16:27:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.238 16:27:48 -- nvmf/common.sh@161 -- # true 00:08:11.238 16:27:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.238 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.238 16:27:48 -- nvmf/common.sh@162 -- # true 00:08:11.238 16:27:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.238 16:27:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.238 16:27:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.238 16:27:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.238 16:27:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.238 16:27:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.238 16:27:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.238 16:27:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:11.238 16:27:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:11.238 16:27:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:11.238 16:27:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:11.238 16:27:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:08:11.238 16:27:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:11.238 16:27:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.238 16:27:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.238 16:27:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.238 16:27:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:11.238 16:27:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:11.239 16:27:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.239 16:27:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.239 16:27:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.239 16:27:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.239 16:27:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.239 16:27:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:11.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:08:11.239 00:08:11.239 --- 10.0.0.2 ping statistics --- 00:08:11.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.239 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:11.239 16:27:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:11.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:08:11.239 00:08:11.239 --- 10.0.0.3 ping statistics --- 00:08:11.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.239 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:11.239 16:27:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:11.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:11.239 00:08:11.239 --- 10.0.0.1 ping statistics --- 00:08:11.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.239 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:11.239 16:27:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.239 16:27:48 -- nvmf/common.sh@421 -- # return 0 00:08:11.239 16:27:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:11.239 16:27:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.239 16:27:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:11.239 16:27:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:11.239 16:27:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.239 16:27:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:11.239 16:27:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:11.239 16:27:48 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:11.239 16:27:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:11.239 16:27:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:11.239 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:11.239 16:27:48 -- nvmf/common.sh@469 -- # nvmfpid=73718 00:08:11.239 16:27:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.239 16:27:48 -- nvmf/common.sh@470 -- # waitforlisten 73718 00:08:11.239 16:27:48 -- common/autotest_common.sh@829 -- # '[' -z 73718 ']' 00:08:11.239 16:27:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.239 16:27:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.239 16:27:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.239 16:27:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.239 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:08:11.498 [2024-11-16 16:27:48.756024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:11.498 [2024-11-16 16:27:48.756137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.498 [2024-11-16 16:27:48.897679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.498 [2024-11-16 16:27:48.973595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.498 [2024-11-16 16:27:48.974093] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.498 [2024-11-16 16:27:48.974153] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.498 [2024-11-16 16:27:48.974459] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
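Both tests rebuild the same virtual topology via nvmf_veth_init before starting nvmf_tgt. Stripped of timestamps, the commands traced above reduce to the following sketch; interface names and addresses are exactly those in the trace, and the nvmf_tgt_if2/nvmf_tgt_br2 pair for the second target address (10.0.0.3) follows the same pattern and is elided here.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up       # bridge joins the host-side veth ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT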
00:08:11.498 [2024-11-16 16:27:48.974711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.498 [2024-11-16 16:27:48.974994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.498 [2024-11-16 16:27:48.974998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.498 [2024-11-16 16:27:48.974885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.434 16:27:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.434 16:27:49 -- common/autotest_common.sh@862 -- # return 0 00:08:12.434 16:27:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:12.434 16:27:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.434 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.434 16:27:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.434 16:27:49 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.434 16:27:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.434 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.434 [2024-11-16 16:27:49.835948] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.434 16:27:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.434 16:27:49 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:12.434 16:27:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.434 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.434 [2024-11-16 16:27:49.862249] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:12.434 16:27:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.434 16:27:49 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:12.434 16:27:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.434 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.434 16:27:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.434 16:27:49 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:12.434 16:27:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.434 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.434 16:27:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.434 16:27:49 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:12.434 16:27:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.434 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.434 16:27:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.434 16:27:49 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.434 16:27:49 -- target/referrals.sh@48 -- # jq length 00:08:12.434 16:27:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.434 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.434 16:27:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.692 16:27:49 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:12.692 16:27:49 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:12.692 16:27:49 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.692 16:27:49 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.692 16:27:49 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:12.692 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.692 16:27:49 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.692 16:27:49 -- target/referrals.sh@21 -- # sort 00:08:12.692 16:27:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.692 16:27:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:12.692 16:27:50 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:12.692 16:27:50 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:12.692 16:27:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.692 16:27:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.692 16:27:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.692 16:27:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.692 16:27:50 -- target/referrals.sh@26 -- # sort 00:08:12.692 16:27:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:12.692 16:27:50 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:12.692 16:27:50 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:12.692 16:27:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.692 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:12.692 16:27:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.692 16:27:50 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:12.692 16:27:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.692 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:12.692 16:27:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.692 16:27:50 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:12.692 16:27:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.692 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:12.692 16:27:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.692 16:27:50 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.692 16:27:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.692 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:12.692 16:27:50 -- target/referrals.sh@56 -- # jq length 00:08:12.692 16:27:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.951 16:27:50 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:12.951 16:27:50 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:12.951 16:27:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.951 16:27:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.951 16:27:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.951 16:27:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.951 16:27:50 -- target/referrals.sh@26 -- # sort 00:08:12.951 16:27:50 -- target/referrals.sh@26 -- # echo 00:08:12.951 16:27:50 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:12.951 16:27:50 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:12.951 16:27:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.951 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:12.951 16:27:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.951 16:27:50 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:12.951 16:27:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.951 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:12.951 16:27:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.951 16:27:50 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:12.951 16:27:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:12.951 16:27:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.951 16:27:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.951 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:12.951 16:27:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:12.951 16:27:50 -- target/referrals.sh@21 -- # sort 00:08:12.951 16:27:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.210 16:27:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:13.210 16:27:50 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.210 16:27:50 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:13.210 16:27:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.210 16:27:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.210 16:27:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.210 16:27:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.210 16:27:50 -- target/referrals.sh@26 -- # sort 00:08:13.210 16:27:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:13.210 16:27:50 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:13.210 16:27:50 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:13.210 16:27:50 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:13.210 16:27:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.210 16:27:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.210 16:27:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.210 16:27:50 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:13.210 16:27:50 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:13.210 16:27:50 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.210 16:27:50 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.210 16:27:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
--hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.210 16:27:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:13.469 16:27:50 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:13.469 16:27:50 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:13.469 16:27:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.469 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.469 16:27:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.469 16:27:50 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:13.469 16:27:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:13.469 16:27:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.469 16:27:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:13.469 16:27:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.469 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.469 16:27:50 -- target/referrals.sh@21 -- # sort 00:08:13.469 16:27:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.469 16:27:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:13.469 16:27:50 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.469 16:27:50 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:13.469 16:27:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.469 16:27:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.469 16:27:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.469 16:27:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.469 16:27:50 -- target/referrals.sh@26 -- # sort 00:08:13.728 16:27:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:13.728 16:27:50 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:13.728 16:27:50 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:13.728 16:27:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:13.728 16:27:50 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:13.728 16:27:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:13.728 16:27:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.728 16:27:51 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:13.728 16:27:51 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:13.728 16:27:51 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:13.728 16:27:51 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:13.728 16:27:51 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.728 16:27:51 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
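[editor's note] The repeated `get_referral_ips rpc` / `get_referral_ips nvme` pairs compare the target's own referral list against what a host sees in the discovery log page. A sketch consistent with the traced commands (the jq filters are verbatim from the trace; the `xargs` flattening stands in for whatever the real referrals.sh uses to echo one line):

  get_referral_ips() {
      if [[ $1 == rpc ]]; then
          # Target-side view via RPC.
          rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs
      elif [[ $1 == nvme ]]; then
          # Host-side view: discovery entries, minus the current discovery subsystem.
          nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
              jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
              sort | xargs
      fi
  }

  # The test passes only when both views agree:
  [[ $(get_referral_ips rpc) == $(get_referral_ips nvme) ]]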
00:08:13.728 16:27:51 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:13.728 16:27:51 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:13.728 16:27:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.728 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:13.987 16:27:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.987 16:27:51 -- target/referrals.sh@82 -- # jq length 00:08:13.987 16:27:51 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:13.987 16:27:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.987 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:13.987 16:27:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.987 16:27:51 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:13.987 16:27:51 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:13.987 16:27:51 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:13.987 16:27:51 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:13.987 16:27:51 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:13.987 16:27:51 -- target/referrals.sh@26 -- # sort 00:08:13.987 16:27:51 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:13.987 16:27:51 -- target/referrals.sh@26 -- # echo 00:08:13.987 16:27:51 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:13.987 16:27:51 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:13.987 16:27:51 -- target/referrals.sh@86 -- # nvmftestfini 00:08:13.987 16:27:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:13.987 16:27:51 -- nvmf/common.sh@116 -- # sync 00:08:14.246 16:27:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:14.246 16:27:51 -- nvmf/common.sh@119 -- # set +e 00:08:14.246 16:27:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:14.246 16:27:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:14.246 rmmod nvme_tcp 00:08:14.246 rmmod nvme_fabrics 00:08:14.246 rmmod nvme_keyring 00:08:14.246 16:27:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:14.246 16:27:51 -- nvmf/common.sh@123 -- # set -e 00:08:14.246 16:27:51 -- nvmf/common.sh@124 -- # return 0 00:08:14.246 16:27:51 -- nvmf/common.sh@477 -- # '[' -n 73718 ']' 00:08:14.246 16:27:51 -- nvmf/common.sh@478 -- # killprocess 73718 00:08:14.246 16:27:51 -- common/autotest_common.sh@936 -- # '[' -z 73718 ']' 00:08:14.246 16:27:51 -- common/autotest_common.sh@940 -- # kill -0 73718 00:08:14.246 16:27:51 -- common/autotest_common.sh@941 -- # uname 00:08:14.246 16:27:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:14.246 16:27:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73718 00:08:14.246 killing process with pid 73718 00:08:14.246 16:27:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:14.246 16:27:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:14.246 16:27:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73718' 00:08:14.246 16:27:51 -- common/autotest_common.sh@955 -- # kill 73718 00:08:14.246 16:27:51 -- common/autotest_common.sh@960 -- # wait 73718 00:08:14.505 16:27:51 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:14.505 16:27:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:14.505 16:27:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:14.505 16:27:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.505 16:27:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:14.505 16:27:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.505 16:27:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.505 16:27:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.505 16:27:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:14.505 00:08:14.505 real 0m3.709s 00:08:14.505 user 0m12.455s 00:08:14.505 sys 0m0.940s 00:08:14.505 16:27:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.505 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:14.505 ************************************ 00:08:14.505 END TEST nvmf_referrals 00:08:14.505 ************************************ 00:08:14.506 16:27:51 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:14.506 16:27:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:14.506 16:27:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.506 16:27:51 -- common/autotest_common.sh@10 -- # set +x 00:08:14.506 ************************************ 00:08:14.506 START TEST nvmf_connect_disconnect 00:08:14.506 ************************************ 00:08:14.506 16:27:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:14.765 * Looking for test storage... 00:08:14.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.765 16:27:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.765 16:27:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.765 16:27:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.765 16:27:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.765 16:27:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.765 16:27:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.765 16:27:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.765 16:27:52 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.765 16:27:52 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.765 16:27:52 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.765 16:27:52 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.765 16:27:52 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.765 16:27:52 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.765 16:27:52 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.765 16:27:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.765 16:27:52 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.765 16:27:52 -- scripts/common.sh@344 -- # : 1 00:08:14.765 16:27:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.765 16:27:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.765 16:27:52 -- scripts/common.sh@364 -- # decimal 1 00:08:14.765 16:27:52 -- scripts/common.sh@352 -- # local d=1 00:08:14.765 16:27:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.765 16:27:52 -- scripts/common.sh@354 -- # echo 1 00:08:14.765 16:27:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.765 16:27:52 -- scripts/common.sh@365 -- # decimal 2 00:08:14.765 16:27:52 -- scripts/common.sh@352 -- # local d=2 00:08:14.765 16:27:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.765 16:27:52 -- scripts/common.sh@354 -- # echo 2 00:08:14.765 16:27:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.765 16:27:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.765 16:27:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.765 16:27:52 -- scripts/common.sh@367 -- # return 0 00:08:14.765 16:27:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.765 16:27:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.765 --rc genhtml_branch_coverage=1 00:08:14.765 --rc genhtml_function_coverage=1 00:08:14.765 --rc genhtml_legend=1 00:08:14.765 --rc geninfo_all_blocks=1 00:08:14.765 --rc geninfo_unexecuted_blocks=1 00:08:14.765 00:08:14.765 ' 00:08:14.765 16:27:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.765 --rc genhtml_branch_coverage=1 00:08:14.765 --rc genhtml_function_coverage=1 00:08:14.765 --rc genhtml_legend=1 00:08:14.765 --rc geninfo_all_blocks=1 00:08:14.765 --rc geninfo_unexecuted_blocks=1 00:08:14.765 00:08:14.765 ' 00:08:14.765 16:27:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.765 --rc genhtml_branch_coverage=1 00:08:14.765 --rc genhtml_function_coverage=1 00:08:14.765 --rc genhtml_legend=1 00:08:14.765 --rc geninfo_all_blocks=1 00:08:14.765 --rc geninfo_unexecuted_blocks=1 00:08:14.765 00:08:14.765 ' 00:08:14.765 16:27:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.765 --rc genhtml_branch_coverage=1 00:08:14.765 --rc genhtml_function_coverage=1 00:08:14.765 --rc genhtml_legend=1 00:08:14.765 --rc geninfo_all_blocks=1 00:08:14.765 --rc geninfo_unexecuted_blocks=1 00:08:14.765 00:08:14.765 ' 00:08:14.765 16:27:52 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.765 16:27:52 -- nvmf/common.sh@7 -- # uname -s 00:08:14.765 16:27:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.765 16:27:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.765 16:27:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.765 16:27:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.765 16:27:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.765 16:27:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.765 16:27:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.765 16:27:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.765 16:27:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.765 16:27:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.765 16:27:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
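[editor's note] The lcov version gate traced above (`lt 1.15 2` calling `cmp_versions 1.15 '<' 2`) is a field-by-field compare after splitting on '.', '-' and ':'. A sketch reconstructed from that trace — simplified, since the real scripts/common.sh routes fields through its `decimal` helper:

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local ver1 ver2 ver1_l ver2_l op=$2 v d1 d2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
          ((d1 > d2)) && { [[ $op == '>' ]] && return 0 || return 1; }
          ((d1 < d2)) && { [[ $op == '<' ]] && return 0 || return 1; }
      done
      return 1   # versions equal: '<' and '>' are both false
  }

Here `lt 1.15 2` returns 0 (1 < 2 in the first field), which is why the trace takes the legacy LCOV_OPTS branch.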
00:08:14.765 16:27:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:08:14.765 16:27:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.765 16:27:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.765 16:27:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.765 16:27:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.765 16:27:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.765 16:27:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.765 16:27:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.765 16:27:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.765 16:27:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.765 16:27:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.765 16:27:52 -- paths/export.sh@5 -- # export PATH 00:08:14.765 16:27:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.765 16:27:52 -- nvmf/common.sh@46 -- # : 0 00:08:14.765 16:27:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:14.765 16:27:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:14.765 16:27:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:14.766 16:27:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.766 16:27:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.766 16:27:52 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:14.766 16:27:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:14.766 16:27:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:14.766 16:27:52 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.766 16:27:52 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.766 16:27:52 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:14.766 16:27:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:14.766 16:27:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.766 16:27:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:14.766 16:27:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:14.766 16:27:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:14.766 16:27:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.766 16:27:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.766 16:27:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.766 16:27:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:14.766 16:27:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:14.766 16:27:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:14.766 16:27:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:14.766 16:27:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:14.766 16:27:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:14.766 16:27:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.766 16:27:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.766 16:27:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.766 16:27:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:14.766 16:27:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.766 16:27:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.766 16:27:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.766 16:27:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.766 16:27:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.766 16:27:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.766 16:27:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.766 16:27:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.766 16:27:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:14.766 16:27:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:14.766 Cannot find device "nvmf_tgt_br" 00:08:14.766 16:27:52 -- nvmf/common.sh@154 -- # true 00:08:14.766 16:27:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.766 Cannot find device "nvmf_tgt_br2" 00:08:14.766 16:27:52 -- nvmf/common.sh@155 -- # true 00:08:14.766 16:27:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:14.766 16:27:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:14.766 Cannot find device "nvmf_tgt_br" 00:08:14.766 16:27:52 -- nvmf/common.sh@157 -- # true 00:08:14.766 16:27:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:14.766 Cannot find device "nvmf_tgt_br2" 00:08:14.766 16:27:52 -- nvmf/common.sh@158 -- # true 00:08:14.766 16:27:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:14.766 16:27:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:14.766 16:27:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:15.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.025 16:27:52 -- nvmf/common.sh@161 -- # true 00:08:15.025 16:27:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.025 16:27:52 -- nvmf/common.sh@162 -- # true 00:08:15.025 16:27:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.025 16:27:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.025 16:27:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.025 16:27:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.025 16:27:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.025 16:27:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.025 16:27:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.025 16:27:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:15.025 16:27:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:15.025 16:27:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:15.025 16:27:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:15.025 16:27:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:15.025 16:27:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:15.025 16:27:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:15.025 16:27:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.025 16:27:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.025 16:27:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:15.025 16:27:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:15.025 16:27:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.025 16:27:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.025 16:27:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.025 16:27:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.025 16:27:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.025 16:27:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:15.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:08:15.025 00:08:15.025 --- 10.0.0.2 ping statistics --- 00:08:15.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.025 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:15.025 16:27:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:15.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:15.025 00:08:15.025 --- 10.0.0.3 ping statistics --- 00:08:15.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.025 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:15.025 16:27:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:15.025 00:08:15.025 --- 10.0.0.1 ping statistics --- 00:08:15.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.025 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:15.025 16:27:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.025 16:27:52 -- nvmf/common.sh@421 -- # return 0 00:08:15.025 16:27:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:15.025 16:27:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.025 16:27:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:15.025 16:27:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:15.025 16:27:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.025 16:27:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:15.025 16:27:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:15.025 16:27:52 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:15.025 16:27:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:15.025 16:27:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.025 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:15.025 16:27:52 -- nvmf/common.sh@469 -- # nvmfpid=74038 00:08:15.025 16:27:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.025 16:27:52 -- nvmf/common.sh@470 -- # waitforlisten 74038 00:08:15.025 16:27:52 -- common/autotest_common.sh@829 -- # '[' -z 74038 ']' 00:08:15.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.025 16:27:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.025 16:27:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.025 16:27:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.025 16:27:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.025 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:08:15.284 [2024-11-16 16:27:52.547224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:15.284 [2024-11-16 16:27:52.547473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.284 [2024-11-16 16:27:52.683655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.284 [2024-11-16 16:27:52.761619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:15.284 [2024-11-16 16:27:52.762114] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.284 [2024-11-16 16:27:52.762248] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.284 [2024-11-16 16:27:52.762392] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
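[editor's note] The namespace rebuild traced above (nvmf_veth_init) puts the initiator in the root namespace at 10.0.0.1 and the target inside nvmf_tgt_ns_spdk at 10.0.0.2/.3, bridged together. Condensed from the traced commands, with the second target interface (nvmf_tgt_if2/10.0.0.3) and some 'up' steps omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target sanity check, as in the output above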
00:08:15.284 [2024-11-16 16:27:52.762821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.284 [2024-11-16 16:27:52.763012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.284 [2024-11-16 16:27:52.763128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.284 [2024-11-16 16:27:52.763130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.219 16:27:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.219 16:27:53 -- common/autotest_common.sh@862 -- # return 0 00:08:16.219 16:27:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:16.219 16:27:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.219 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.219 16:27:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.219 16:27:53 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:16.219 16:27:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.220 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.220 [2024-11-16 16:27:53.601876] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.220 16:27:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.220 16:27:53 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:16.220 16:27:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.220 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.220 16:27:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.220 16:27:53 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:16.220 16:27:53 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:16.220 16:27:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.220 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.220 16:27:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.220 16:27:53 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.220 16:27:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.220 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.220 16:27:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.220 16:27:53 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.220 16:27:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.220 16:27:53 -- common/autotest_common.sh@10 -- # set +x 00:08:16.220 [2024-11-16 16:27:53.687177] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.220 16:27:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.220 16:27:53 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:16.220 16:27:53 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:16.220 16:27:53 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:16.220 16:27:53 -- target/connect_disconnect.sh@34 -- # set +x 00:08:18.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
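[editor's note] After `num_iterations=100`, `NVME_CONNECT='nvme connect -i 8'` and `set +x`, tracing goes quiet; only the disconnect messages are echoed, which is why the log below is 100 bare "disconnected 1 controller(s)" lines. A sketch of the hidden loop, assembled from the visible setup (the per-iteration I/O check is an assumption):

  num_iterations=100
  for ((i = 0; i < num_iterations; i++)); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 \
          -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
      # ... presumably wait for the namespace and verify it is usable ...
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the 'disconnected 1 controller(s)' line
  done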
00:08:27.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.210 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:18.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.243 16:31:39 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
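[editor's note] The teardown traced next (`nvmftestfini` -> `killprocess 74038`) reduces to the following sketch, lifted from the xtrace; the real helper's sudo handling is more involved than the bail-out shown here:

  killprocess() {
      local pid=$1 process_name
      kill -0 "$pid"                                        # fail fast if already gone
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # 'reactor_0' for nvmf_tgt
      fi
      [[ $process_name == sudo ]] && return 1               # never kill the elevated wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                           # reap; works since nvmf_tgt is our child
  }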
00:12:02.243 16:31:39 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:02.243 16:31:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:02.243 16:31:39 -- nvmf/common.sh@116 -- # sync 00:12:02.243 16:31:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:02.243 16:31:39 -- nvmf/common.sh@119 -- # set +e 00:12:02.243 16:31:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:02.243 16:31:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:02.243 rmmod nvme_tcp 00:12:02.243 rmmod nvme_fabrics 00:12:02.243 rmmod nvme_keyring 00:12:02.243 16:31:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:02.243 16:31:39 -- nvmf/common.sh@123 -- # set -e 00:12:02.243 16:31:39 -- nvmf/common.sh@124 -- # return 0 00:12:02.243 16:31:39 -- nvmf/common.sh@477 -- # '[' -n 74038 ']' 00:12:02.243 16:31:39 -- nvmf/common.sh@478 -- # killprocess 74038 00:12:02.243 16:31:39 -- common/autotest_common.sh@936 -- # '[' -z 74038 ']' 00:12:02.243 16:31:39 -- common/autotest_common.sh@940 -- # kill -0 74038 00:12:02.243 16:31:39 -- common/autotest_common.sh@941 -- # uname 00:12:02.243 16:31:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:02.243 16:31:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74038 00:12:02.243 16:31:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:02.243 16:31:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:02.243 killing process with pid 74038 00:12:02.243 16:31:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74038' 00:12:02.243 16:31:39 -- common/autotest_common.sh@955 -- # kill 74038 00:12:02.243 16:31:39 -- common/autotest_common.sh@960 -- # wait 74038 00:12:02.243 16:31:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:02.243 16:31:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:02.243 16:31:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:02.243 16:31:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.243 16:31:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:02.243 16:31:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.243 16:31:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.243 16:31:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.243 16:31:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:02.243 00:12:02.243 real 3m47.740s 00:12:02.243 user 14m51.169s 00:12:02.243 sys 0m18.604s 00:12:02.243 16:31:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:02.243 ************************************ 00:12:02.243 END TEST nvmf_connect_disconnect 00:12:02.243 ************************************ 00:12:02.243 16:31:39 -- common/autotest_common.sh@10 -- # set +x 00:12:02.243 16:31:39 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.243 16:31:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:02.243 16:31:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:02.243 16:31:39 -- common/autotest_common.sh@10 -- # set +x 00:12:02.243 ************************************ 00:12:02.243 START TEST nvmf_multitarget 00:12:02.243 ************************************ 00:12:02.243 16:31:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:02.503 * Looking for test storage... 
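[editor's note] The START/END TEST banners and the real/user/sys lines around each suite come from the `run_test` wrapper invoked above. A rough sketch implied by that output — the banner formatting and argument handling are assumptions, not the verbatim autotest_common.sh:

  run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # e.g. .../multitarget.sh --transport=tcp; emits the real/user/sys lines
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }

  run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp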
00:12:02.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:02.503 16:31:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:02.503 16:31:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:02.503 16:31:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:02.503 16:31:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:02.503 16:31:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:02.503 16:31:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:02.503 16:31:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:02.503 16:31:39 -- scripts/common.sh@335 -- # IFS=.-: 00:12:02.503 16:31:39 -- scripts/common.sh@335 -- # read -ra ver1 00:12:02.503 16:31:39 -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.503 16:31:39 -- scripts/common.sh@336 -- # read -ra ver2 00:12:02.503 16:31:39 -- scripts/common.sh@337 -- # local 'op=<' 00:12:02.503 16:31:39 -- scripts/common.sh@339 -- # ver1_l=2 00:12:02.503 16:31:39 -- scripts/common.sh@340 -- # ver2_l=1 00:12:02.503 16:31:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:02.503 16:31:39 -- scripts/common.sh@343 -- # case "$op" in 00:12:02.503 16:31:39 -- scripts/common.sh@344 -- # : 1 00:12:02.503 16:31:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:02.503 16:31:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.503 16:31:39 -- scripts/common.sh@364 -- # decimal 1 00:12:02.503 16:31:39 -- scripts/common.sh@352 -- # local d=1 00:12:02.503 16:31:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.503 16:31:39 -- scripts/common.sh@354 -- # echo 1 00:12:02.503 16:31:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:02.503 16:31:39 -- scripts/common.sh@365 -- # decimal 2 00:12:02.503 16:31:39 -- scripts/common.sh@352 -- # local d=2 00:12:02.503 16:31:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.503 16:31:39 -- scripts/common.sh@354 -- # echo 2 00:12:02.503 16:31:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:02.503 16:31:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:02.503 16:31:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:02.503 16:31:39 -- scripts/common.sh@367 -- # return 0 00:12:02.503 16:31:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.503 16:31:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:02.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.503 --rc genhtml_branch_coverage=1 00:12:02.503 --rc genhtml_function_coverage=1 00:12:02.503 --rc genhtml_legend=1 00:12:02.503 --rc geninfo_all_blocks=1 00:12:02.503 --rc geninfo_unexecuted_blocks=1 00:12:02.503 00:12:02.503 ' 00:12:02.503 16:31:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:02.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.503 --rc genhtml_branch_coverage=1 00:12:02.503 --rc genhtml_function_coverage=1 00:12:02.503 --rc genhtml_legend=1 00:12:02.503 --rc geninfo_all_blocks=1 00:12:02.503 --rc geninfo_unexecuted_blocks=1 00:12:02.503 00:12:02.503 ' 00:12:02.503 16:31:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:02.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.503 --rc genhtml_branch_coverage=1 00:12:02.503 --rc genhtml_function_coverage=1 00:12:02.503 --rc genhtml_legend=1 00:12:02.503 --rc geninfo_all_blocks=1 00:12:02.503 --rc geninfo_unexecuted_blocks=1 00:12:02.503 00:12:02.503 ' 00:12:02.503 
16:31:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:02.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.503 --rc genhtml_branch_coverage=1 00:12:02.503 --rc genhtml_function_coverage=1 00:12:02.503 --rc genhtml_legend=1 00:12:02.503 --rc geninfo_all_blocks=1 00:12:02.503 --rc geninfo_unexecuted_blocks=1 00:12:02.503 00:12:02.503 ' 00:12:02.503 16:31:39 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.503 16:31:39 -- nvmf/common.sh@7 -- # uname -s 00:12:02.503 16:31:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.503 16:31:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.503 16:31:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.503 16:31:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.503 16:31:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.503 16:31:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.503 16:31:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.503 16:31:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.503 16:31:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.503 16:31:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.503 16:31:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:02.503 16:31:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:02.503 16:31:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.503 16:31:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.503 16:31:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.503 16:31:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.503 16:31:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.503 16:31:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.503 16:31:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.503 16:31:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.503 16:31:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.503 16:31:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.503 16:31:39 -- paths/export.sh@5 -- # export PATH 00:12:02.503 16:31:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.503 16:31:39 -- nvmf/common.sh@46 -- # : 0 00:12:02.503 16:31:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:02.503 16:31:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:02.503 16:31:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:02.503 16:31:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.503 16:31:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.503 16:31:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:02.503 16:31:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:02.503 16:31:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:02.503 16:31:39 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:02.503 16:31:39 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:02.503 16:31:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:02.503 16:31:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.503 16:31:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:02.503 16:31:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:02.503 16:31:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:02.503 16:31:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.503 16:31:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.503 16:31:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.503 16:31:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:02.503 16:31:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:02.503 16:31:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:02.503 16:31:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:02.503 16:31:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:02.503 16:31:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:02.503 16:31:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.503 16:31:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.503 16:31:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:02.503 16:31:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:02.503 16:31:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:02.503 16:31:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:02.503 16:31:39 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:02.503 16:31:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.503 16:31:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:02.503 16:31:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:02.503 16:31:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:02.503 16:31:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:02.503 16:31:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:02.503 16:31:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:02.503 Cannot find device "nvmf_tgt_br" 00:12:02.503 16:31:39 -- nvmf/common.sh@154 -- # true 00:12:02.503 16:31:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.503 Cannot find device "nvmf_tgt_br2" 00:12:02.503 16:31:39 -- nvmf/common.sh@155 -- # true 00:12:02.503 16:31:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:02.503 16:31:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:02.503 Cannot find device "nvmf_tgt_br" 00:12:02.503 16:31:39 -- nvmf/common.sh@157 -- # true 00:12:02.503 16:31:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:02.762 Cannot find device "nvmf_tgt_br2" 00:12:02.762 16:31:40 -- nvmf/common.sh@158 -- # true 00:12:02.762 16:31:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:02.762 16:31:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:02.762 16:31:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:02.762 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.762 16:31:40 -- nvmf/common.sh@161 -- # true 00:12:02.762 16:31:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.762 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.762 16:31:40 -- nvmf/common.sh@162 -- # true 00:12:02.762 16:31:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.762 16:31:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:02.762 16:31:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.762 16:31:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.762 16:31:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.762 16:31:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.762 16:31:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:02.762 16:31:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:02.762 16:31:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:02.762 16:31:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:02.762 16:31:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:02.762 16:31:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:02.762 16:31:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:02.762 16:31:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.762 16:31:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.762 16:31:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:02.762 16:31:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:02.762 16:31:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:02.762 16:31:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.762 16:31:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.762 16:31:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.762 16:31:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.762 16:31:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.762 16:31:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:02.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:12:02.762 00:12:02.762 --- 10.0.0.2 ping statistics --- 00:12:02.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.762 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:02.762 16:31:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:02.762 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.762 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:02.762 00:12:02.762 --- 10.0.0.3 ping statistics --- 00:12:02.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.762 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:02.762 16:31:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:02.762 00:12:02.762 --- 10.0.0.1 ping statistics --- 00:12:02.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.762 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:02.762 16:31:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.762 16:31:40 -- nvmf/common.sh@421 -- # return 0 00:12:02.762 16:31:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:02.762 16:31:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.762 16:31:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:02.762 16:31:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:02.762 16:31:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.762 16:31:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:02.762 16:31:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:02.762 16:31:40 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:02.762 16:31:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:02.762 16:31:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:02.762 16:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.020 16:31:40 -- nvmf/common.sh@469 -- # nvmfpid=77843 00:12:03.020 16:31:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.020 16:31:40 -- nvmf/common.sh@470 -- # waitforlisten 77843 00:12:03.020 16:31:40 -- common/autotest_common.sh@829 -- # '[' -z 77843 ']' 00:12:03.020 16:31:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.020 16:31:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
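For reference, the veth/namespace topology that nvmf_veth_init assembles in the trace above condenses to the standalone sketch below. Every interface, namespace, address, and iptables rule is taken directly from the log; only the grouping of the bridge-side links into a loop is editorial.

ip netns add nvmf_tgt_ns_spdk                                # namespace that will host nvmf_tgt
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair (both ends stay in root ns)
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target pair 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joins the root-ns peer ends
for br_end in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br_end" up
    ip link set "$br_end" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on port 4420
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                # let traffic hairpin across the bridge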
00:12:03.020 16:31:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.020 16:31:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.020 16:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.020 [2024-11-16 16:31:40.310933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:03.020 [2024-11-16 16:31:40.311026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.020 [2024-11-16 16:31:40.453157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.279 [2024-11-16 16:31:40.532160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:03.279 [2024-11-16 16:31:40.532915] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.279 [2024-11-16 16:31:40.533219] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.279 [2024-11-16 16:31:40.533495] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.279 [2024-11-16 16:31:40.533849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.279 [2024-11-16 16:31:40.533922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.279 [2024-11-16 16:31:40.534324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.279 [2024-11-16 16:31:40.534330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.845 16:31:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.845 16:31:41 -- common/autotest_common.sh@862 -- # return 0 00:12:03.845 16:31:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:03.845 16:31:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:03.845 16:31:41 -- common/autotest_common.sh@10 -- # set +x 00:12:04.103 16:31:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.103 16:31:41 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:04.103 16:31:41 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.103 16:31:41 -- target/multitarget.sh@21 -- # jq length 00:12:04.103 16:31:41 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:04.103 16:31:41 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:04.362 "nvmf_tgt_1" 00:12:04.362 16:31:41 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:04.362 "nvmf_tgt_2" 00:12:04.362 16:31:41 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.362 16:31:41 -- target/multitarget.sh@28 -- # jq length 00:12:04.621 16:31:41 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:04.621 16:31:41 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:04.621 true 00:12:04.621 16:31:42 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:04.880 true 00:12:04.880 16:31:42 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:04.880 16:31:42 -- target/multitarget.sh@35 -- # jq length 00:12:04.880 16:31:42 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:04.880 16:31:42 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:04.880 16:31:42 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:04.880 16:31:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:04.880 16:31:42 -- nvmf/common.sh@116 -- # sync 00:12:04.880 16:31:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:04.880 16:31:42 -- nvmf/common.sh@119 -- # set +e 00:12:04.880 16:31:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:04.880 16:31:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:05.139 rmmod nvme_tcp 00:12:05.139 rmmod nvme_fabrics 00:12:05.139 rmmod nvme_keyring 00:12:05.139 16:31:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:05.139 16:31:42 -- nvmf/common.sh@123 -- # set -e 00:12:05.139 16:31:42 -- nvmf/common.sh@124 -- # return 0 00:12:05.139 16:31:42 -- nvmf/common.sh@477 -- # '[' -n 77843 ']' 00:12:05.139 16:31:42 -- nvmf/common.sh@478 -- # killprocess 77843 00:12:05.139 16:31:42 -- common/autotest_common.sh@936 -- # '[' -z 77843 ']' 00:12:05.139 16:31:42 -- common/autotest_common.sh@940 -- # kill -0 77843 00:12:05.139 16:31:42 -- common/autotest_common.sh@941 -- # uname 00:12:05.139 16:31:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:05.139 16:31:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77843 00:12:05.139 killing process with pid 77843 00:12:05.139 16:31:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:05.139 16:31:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:05.139 16:31:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77843' 00:12:05.139 16:31:42 -- common/autotest_common.sh@955 -- # kill 77843 00:12:05.139 16:31:42 -- common/autotest_common.sh@960 -- # wait 77843 00:12:05.397 16:31:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:05.398 16:31:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:05.398 16:31:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:05.398 16:31:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:05.398 16:31:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:05.398 16:31:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.398 16:31:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.398 16:31:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.398 16:31:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:05.398 00:12:05.398 real 0m3.050s 00:12:05.398 user 0m10.027s 00:12:05.398 sys 0m0.743s 00:12:05.398 16:31:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:05.398 16:31:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.398 ************************************ 00:12:05.398 END TEST nvmf_multitarget 00:12:05.398 ************************************ 00:12:05.398 16:31:42 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:05.398 16:31:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:05.398 16:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:05.398 
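The multitarget checks traced above reduce to the short sequence below. Script path, target names, and expected counts are copied from the log; the bracket tests mirror the jq length comparisons in multitarget.sh, and reading -s 32 as a per-target subsystem limit is an assumption, not something the trace states.

rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists at start
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # -s 32: assumed max subsystems per target
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1              # each delete echoes "true" in the trace
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only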
16:31:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.398 ************************************ 00:12:05.398 START TEST nvmf_rpc 00:12:05.398 ************************************ 00:12:05.398 16:31:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:05.657 * Looking for test storage... 00:12:05.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:05.657 16:31:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:05.657 16:31:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:05.657 16:31:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:05.657 16:31:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:05.657 16:31:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:05.657 16:31:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:05.657 16:31:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:05.657 16:31:42 -- scripts/common.sh@335 -- # IFS=.-: 00:12:05.657 16:31:42 -- scripts/common.sh@335 -- # read -ra ver1 00:12:05.657 16:31:42 -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.657 16:31:42 -- scripts/common.sh@336 -- # read -ra ver2 00:12:05.657 16:31:42 -- scripts/common.sh@337 -- # local 'op=<' 00:12:05.657 16:31:42 -- scripts/common.sh@339 -- # ver1_l=2 00:12:05.657 16:31:42 -- scripts/common.sh@340 -- # ver2_l=1 00:12:05.657 16:31:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:05.657 16:31:42 -- scripts/common.sh@343 -- # case "$op" in 00:12:05.657 16:31:42 -- scripts/common.sh@344 -- # : 1 00:12:05.657 16:31:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:05.657 16:31:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.657 16:31:42 -- scripts/common.sh@364 -- # decimal 1 00:12:05.657 16:31:42 -- scripts/common.sh@352 -- # local d=1 00:12:05.657 16:31:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.657 16:31:42 -- scripts/common.sh@354 -- # echo 1 00:12:05.657 16:31:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:05.657 16:31:43 -- scripts/common.sh@365 -- # decimal 2 00:12:05.657 16:31:43 -- scripts/common.sh@352 -- # local d=2 00:12:05.657 16:31:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.657 16:31:43 -- scripts/common.sh@354 -- # echo 2 00:12:05.657 16:31:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:05.657 16:31:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:05.657 16:31:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:05.657 16:31:43 -- scripts/common.sh@367 -- # return 0 00:12:05.657 16:31:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.657 16:31:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:05.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.657 --rc genhtml_branch_coverage=1 00:12:05.657 --rc genhtml_function_coverage=1 00:12:05.657 --rc genhtml_legend=1 00:12:05.657 --rc geninfo_all_blocks=1 00:12:05.657 --rc geninfo_unexecuted_blocks=1 00:12:05.657 00:12:05.657 ' 00:12:05.657 16:31:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:05.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.657 --rc genhtml_branch_coverage=1 00:12:05.657 --rc genhtml_function_coverage=1 00:12:05.657 --rc genhtml_legend=1 00:12:05.657 --rc geninfo_all_blocks=1 00:12:05.657 --rc geninfo_unexecuted_blocks=1 00:12:05.657 00:12:05.657 ' 00:12:05.657 16:31:43 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:05.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.657 --rc genhtml_branch_coverage=1 00:12:05.657 --rc genhtml_function_coverage=1 00:12:05.657 --rc genhtml_legend=1 00:12:05.657 --rc geninfo_all_blocks=1 00:12:05.657 --rc geninfo_unexecuted_blocks=1 00:12:05.657 00:12:05.657 ' 00:12:05.657 16:31:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:05.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.657 --rc genhtml_branch_coverage=1 00:12:05.657 --rc genhtml_function_coverage=1 00:12:05.657 --rc genhtml_legend=1 00:12:05.657 --rc geninfo_all_blocks=1 00:12:05.657 --rc geninfo_unexecuted_blocks=1 00:12:05.657 00:12:05.657 ' 00:12:05.657 16:31:43 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.657 16:31:43 -- nvmf/common.sh@7 -- # uname -s 00:12:05.657 16:31:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.657 16:31:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.657 16:31:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.657 16:31:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.657 16:31:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.657 16:31:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.657 16:31:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.657 16:31:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.657 16:31:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.657 16:31:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.657 16:31:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:05.657 16:31:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:05.657 16:31:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.657 16:31:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.657 16:31:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:05.657 16:31:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.657 16:31:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.657 16:31:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.657 16:31:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.657 16:31:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.657 16:31:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.657 16:31:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.657 16:31:43 -- paths/export.sh@5 -- # export PATH 00:12:05.657 16:31:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.657 16:31:43 -- nvmf/common.sh@46 -- # : 0 00:12:05.657 16:31:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:05.658 16:31:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:05.658 16:31:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:05.658 16:31:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.658 16:31:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.658 16:31:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:05.658 16:31:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:05.658 16:31:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:05.658 16:31:43 -- target/rpc.sh@11 -- # loops=5 00:12:05.658 16:31:43 -- target/rpc.sh@23 -- # nvmftestinit 00:12:05.658 16:31:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:05.658 16:31:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.658 16:31:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:05.658 16:31:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:05.658 16:31:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:05.658 16:31:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.658 16:31:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.658 16:31:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.658 16:31:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:05.658 16:31:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:05.658 16:31:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:05.658 16:31:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:05.658 16:31:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:05.658 16:31:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:05.658 16:31:43 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:05.658 16:31:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.658 16:31:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:05.658 16:31:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:05.658 16:31:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:05.658 16:31:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:05.658 16:31:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:05.658 16:31:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.658 16:31:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:05.658 16:31:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:05.658 16:31:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:05.658 16:31:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:05.658 16:31:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:05.658 16:31:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:05.658 Cannot find device "nvmf_tgt_br" 00:12:05.658 16:31:43 -- nvmf/common.sh@154 -- # true 00:12:05.658 16:31:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.658 Cannot find device "nvmf_tgt_br2" 00:12:05.658 16:31:43 -- nvmf/common.sh@155 -- # true 00:12:05.658 16:31:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:05.658 16:31:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:05.658 Cannot find device "nvmf_tgt_br" 00:12:05.658 16:31:43 -- nvmf/common.sh@157 -- # true 00:12:05.658 16:31:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:05.658 Cannot find device "nvmf_tgt_br2" 00:12:05.658 16:31:43 -- nvmf/common.sh@158 -- # true 00:12:05.658 16:31:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:05.916 16:31:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:05.916 16:31:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:05.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.916 16:31:43 -- nvmf/common.sh@161 -- # true 00:12:05.916 16:31:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.916 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.916 16:31:43 -- nvmf/common.sh@162 -- # true 00:12:05.916 16:31:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:05.916 16:31:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:05.916 16:31:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:05.916 16:31:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:05.916 16:31:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:05.916 16:31:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.916 16:31:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.916 16:31:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:05.916 16:31:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:05.916 16:31:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:05.916 16:31:43 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:12:05.916 16:31:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:05.916 16:31:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:05.916 16:31:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:05.916 16:31:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.916 16:31:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:05.916 16:31:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:05.916 16:31:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:05.917 16:31:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.917 16:31:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.917 16:31:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:05.917 16:31:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:05.917 16:31:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:05.917 16:31:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:05.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:12:05.917 00:12:05.917 --- 10.0.0.2 ping statistics --- 00:12:05.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.917 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:05.917 16:31:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:05.917 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:05.917 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:05.917 00:12:05.917 --- 10.0.0.3 ping statistics --- 00:12:05.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.917 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:05.917 16:31:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:05.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:05.917 00:12:05.917 --- 10.0.0.1 ping statistics --- 00:12:05.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.917 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:06.176 16:31:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.176 16:31:43 -- nvmf/common.sh@421 -- # return 0 00:12:06.176 16:31:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:06.176 16:31:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.176 16:31:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:06.176 16:31:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:06.176 16:31:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.176 16:31:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:06.176 16:31:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:06.176 16:31:43 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:06.176 16:31:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:06.176 16:31:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:06.176 16:31:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
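Before each target start the harness re-verifies the wiring with single-packet pings, exactly as in the trace just above; condensed, the smoke test is:

ping -c 1 10.0.0.2                                   # root ns -> first target interface
ping -c 1 10.0.0.3                                   # root ns -> second target interface
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator, through the bridge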
00:12:06.176 16:31:43 -- nvmf/common.sh@469 -- # nvmfpid=78079 00:12:06.176 16:31:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.176 16:31:43 -- nvmf/common.sh@470 -- # waitforlisten 78079 00:12:06.176 16:31:43 -- common/autotest_common.sh@829 -- # '[' -z 78079 ']' 00:12:06.176 16:31:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.176 16:31:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:06.176 16:31:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.176 16:31:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:06.176 16:31:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.176 [2024-11-16 16:31:43.486367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:06.176 [2024-11-16 16:31:43.486461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.176 [2024-11-16 16:31:43.629547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.435 [2024-11-16 16:31:43.706187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:06.435 [2024-11-16 16:31:43.706348] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.435 [2024-11-16 16:31:43.706363] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.435 [2024-11-16 16:31:43.706372] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
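The launch that produced the DPDK/EAL lines above amounts to the following. The nvmf_tgt command line is verbatim from the log; the polling loop is only a sketch of what waitforlisten does (the real helper in autotest_common.sh is more involved), using rpc_get_methods, a standard SPDK RPC, as a liveness probe.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &  # shm id 0, all tracepoints, cores 0-3
nvmfpid=$!
# Rough equivalent of waitforlisten: poll until the app answers RPCs on its socket.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done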
00:12:06.435 [2024-11-16 16:31:43.706525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.435 [2024-11-16 16:31:43.707410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.435 [2024-11-16 16:31:43.707595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.435 [2024-11-16 16:31:43.707604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.372 16:31:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.372 16:31:44 -- common/autotest_common.sh@862 -- # return 0 00:12:07.372 16:31:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:07.372 16:31:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:07.372 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.372 16:31:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.372 16:31:44 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:07.372 16:31:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.372 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.372 16:31:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.372 16:31:44 -- target/rpc.sh@26 -- # stats='{ 00:12:07.372 "poll_groups": [ 00:12:07.372 { 00:12:07.372 "admin_qpairs": 0, 00:12:07.372 "completed_nvme_io": 0, 00:12:07.372 "current_admin_qpairs": 0, 00:12:07.372 "current_io_qpairs": 0, 00:12:07.372 "io_qpairs": 0, 00:12:07.372 "name": "nvmf_tgt_poll_group_0", 00:12:07.372 "pending_bdev_io": 0, 00:12:07.372 "transports": [] 00:12:07.372 }, 00:12:07.372 { 00:12:07.372 "admin_qpairs": 0, 00:12:07.372 "completed_nvme_io": 0, 00:12:07.372 "current_admin_qpairs": 0, 00:12:07.372 "current_io_qpairs": 0, 00:12:07.372 "io_qpairs": 0, 00:12:07.372 "name": "nvmf_tgt_poll_group_1", 00:12:07.372 "pending_bdev_io": 0, 00:12:07.372 "transports": [] 00:12:07.372 }, 00:12:07.372 { 00:12:07.372 "admin_qpairs": 0, 00:12:07.372 "completed_nvme_io": 0, 00:12:07.372 "current_admin_qpairs": 0, 00:12:07.372 "current_io_qpairs": 0, 00:12:07.372 "io_qpairs": 0, 00:12:07.372 "name": "nvmf_tgt_poll_group_2", 00:12:07.372 "pending_bdev_io": 0, 00:12:07.372 "transports": [] 00:12:07.372 }, 00:12:07.372 { 00:12:07.372 "admin_qpairs": 0, 00:12:07.372 "completed_nvme_io": 0, 00:12:07.372 "current_admin_qpairs": 0, 00:12:07.372 "current_io_qpairs": 0, 00:12:07.372 "io_qpairs": 0, 00:12:07.372 "name": "nvmf_tgt_poll_group_3", 00:12:07.372 "pending_bdev_io": 0, 00:12:07.372 "transports": [] 00:12:07.372 } 00:12:07.372 ], 00:12:07.372 "tick_rate": 2200000000 00:12:07.372 }' 00:12:07.372 16:31:44 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:07.372 16:31:44 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:07.372 16:31:44 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:07.372 16:31:44 -- target/rpc.sh@15 -- # wc -l 00:12:07.372 16:31:44 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:07.372 16:31:44 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:07.372 16:31:44 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:07.372 16:31:44 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.372 16:31:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.372 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.372 [2024-11-16 16:31:44.680955] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.372 16:31:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.372 16:31:44 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:07.372 16:31:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.372 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.372 16:31:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.372 16:31:44 -- target/rpc.sh@33 -- # stats='{ 00:12:07.372 "poll_groups": [ 00:12:07.372 { 00:12:07.372 "admin_qpairs": 0, 00:12:07.372 "completed_nvme_io": 0, 00:12:07.372 "current_admin_qpairs": 0, 00:12:07.372 "current_io_qpairs": 0, 00:12:07.372 "io_qpairs": 0, 00:12:07.372 "name": "nvmf_tgt_poll_group_0", 00:12:07.372 "pending_bdev_io": 0, 00:12:07.372 "transports": [ 00:12:07.372 { 00:12:07.372 "trtype": "TCP" 00:12:07.372 } 00:12:07.372 ] 00:12:07.372 }, 00:12:07.372 { 00:12:07.372 "admin_qpairs": 0, 00:12:07.372 "completed_nvme_io": 0, 00:12:07.372 "current_admin_qpairs": 0, 00:12:07.372 "current_io_qpairs": 0, 00:12:07.372 "io_qpairs": 0, 00:12:07.372 "name": "nvmf_tgt_poll_group_1", 00:12:07.372 "pending_bdev_io": 0, 00:12:07.372 "transports": [ 00:12:07.372 { 00:12:07.372 "trtype": "TCP" 00:12:07.372 } 00:12:07.372 ] 00:12:07.372 }, 00:12:07.372 { 00:12:07.372 "admin_qpairs": 0, 00:12:07.372 "completed_nvme_io": 0, 00:12:07.372 "current_admin_qpairs": 0, 00:12:07.372 "current_io_qpairs": 0, 00:12:07.372 "io_qpairs": 0, 00:12:07.372 "name": "nvmf_tgt_poll_group_2", 00:12:07.372 "pending_bdev_io": 0, 00:12:07.372 "transports": [ 00:12:07.372 { 00:12:07.372 "trtype": "TCP" 00:12:07.372 } 00:12:07.372 ] 00:12:07.372 }, 00:12:07.372 { 00:12:07.373 "admin_qpairs": 0, 00:12:07.373 "completed_nvme_io": 0, 00:12:07.373 "current_admin_qpairs": 0, 00:12:07.373 "current_io_qpairs": 0, 00:12:07.373 "io_qpairs": 0, 00:12:07.373 "name": "nvmf_tgt_poll_group_3", 00:12:07.373 "pending_bdev_io": 0, 00:12:07.373 "transports": [ 00:12:07.373 { 00:12:07.373 "trtype": "TCP" 00:12:07.373 } 00:12:07.373 ] 00:12:07.373 } 00:12:07.373 ], 00:12:07.373 "tick_rate": 2200000000 00:12:07.373 }' 00:12:07.373 16:31:44 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:07.373 16:31:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:07.373 16:31:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.373 16:31:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:07.373 16:31:44 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:07.373 16:31:44 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:07.373 16:31:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:07.373 16:31:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:07.373 16:31:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:07.373 16:31:44 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:07.373 16:31:44 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:07.373 16:31:44 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:07.373 16:31:44 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:07.373 16:31:44 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:07.373 16:31:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.373 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.373 Malloc1 00:12:07.373 16:31:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.373 16:31:44 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.373 16:31:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.373 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.632 
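The jcount/jsum helpers whose expansions appear above can be reconstructed as below. That they read the JSON previously captured in $stats is inferred from the trace, not quoted from rpc.sh, so treat this as a sketch.

jcount() {                      # count how many values a jq filter yields
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l
}
jsum() {                        # sum the numeric values a jq filter yields
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
# From the stats above: jcount '.poll_groups[].name' gives 4 (one poll group per core
# under -m 0xF), and jsum '.poll_groups[].io_qpairs' gives 0 before any host connects.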
16:31:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.632 16:31:44 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.632 16:31:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.632 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.632 16:31:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.632 16:31:44 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:07.632 16:31:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.632 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.632 16:31:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.632 16:31:44 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.632 16:31:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.632 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.632 [2024-11-16 16:31:44.887933] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.632 16:31:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.632 16:31:44 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 -a 10.0.0.2 -s 4420 00:12:07.632 16:31:44 -- common/autotest_common.sh@650 -- # local es=0 00:12:07.632 16:31:44 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 -a 10.0.0.2 -s 4420 00:12:07.632 16:31:44 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:07.632 16:31:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.632 16:31:44 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:07.632 16:31:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.632 16:31:44 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:07.632 16:31:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.632 16:31:44 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:07.632 16:31:44 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:07.632 16:31:44 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 -a 10.0.0.2 -s 4420 00:12:07.632 [2024-11-16 16:31:44.916184] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007' 00:12:07.632 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:07.632 could not add new controller: failed to write to nvme-fabrics device 00:12:07.632 16:31:44 -- common/autotest_common.sh@653 -- # es=1 00:12:07.632 16:31:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:07.632 16:31:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:07.632 16:31:44 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:07.632 16:31:44 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:07.632 16:31:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.632 16:31:44 -- common/autotest_common.sh@10 -- # set +x 00:12:07.632 16:31:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.632 16:31:44 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.632 16:31:45 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.632 16:31:45 -- common/autotest_common.sh@1187 -- # local i=0 00:12:07.632 16:31:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.632 16:31:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:07.632 16:31:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:10.166 16:31:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:10.166 16:31:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:10.166 16:31:47 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.166 16:31:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:10.166 16:31:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.166 16:31:47 -- common/autotest_common.sh@1197 -- # return 0 00:12:10.166 16:31:47 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.166 16:31:47 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.166 16:31:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:10.166 16:31:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:10.166 16:31:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.166 16:31:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:10.166 16:31:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.166 16:31:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:10.166 16:31:47 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:10.166 16:31:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.166 16:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:10.166 16:31:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.166 16:31:47 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.166 16:31:47 -- common/autotest_common.sh@650 -- # local es=0 00:12:10.166 16:31:47 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.166 16:31:47 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:10.166 16:31:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.166 16:31:47 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:10.166 16:31:47 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.166 16:31:47 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:10.166 16:31:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.166 16:31:47 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:10.166 16:31:47 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:10.166 16:31:47 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.166 [2024-11-16 16:31:47.337987] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007' 00:12:10.166 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:10.166 could not add new controller: failed to write to nvme-fabrics device 00:12:10.166 16:31:47 -- common/autotest_common.sh@653 -- # es=1 00:12:10.166 16:31:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:10.166 16:31:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:10.166 16:31:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:10.166 16:31:47 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:10.166 16:31:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.166 16:31:47 -- common/autotest_common.sh@10 -- # set +x 00:12:10.166 16:31:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.166 16:31:47 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.166 16:31:47 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.166 16:31:47 -- common/autotest_common.sh@1187 -- # local i=0 00:12:10.166 16:31:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.166 16:31:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:10.166 16:31:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:12.066 16:31:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:12.066 16:31:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:12.066 16:31:49 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.066 16:31:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:12.066 16:31:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.066 16:31:49 -- common/autotest_common.sh@1197 -- # return 0 00:12:12.066 16:31:49 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.325 16:31:49 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.325 16:31:49 -- common/autotest_common.sh@1208 -- # local i=0 00:12:12.325 16:31:49 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:12.325 16:31:49 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.325 16:31:49 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:12.325 16:31:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.325 16:31:49 -- common/autotest_common.sh@1220 -- # return 0 00:12:12.325 16:31:49 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.325 16:31:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.325 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.325 16:31:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.325 16:31:49 -- target/rpc.sh@81 -- # seq 1 5 00:12:12.325 16:31:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.325 16:31:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.325 16:31:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.325 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.325 16:31:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.325 16:31:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.325 16:31:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.325 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.325 [2024-11-16 16:31:49.652980] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.325 16:31:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.325 16:31:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.325 16:31:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.325 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.325 16:31:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.325 16:31:49 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.325 16:31:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.325 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:12:12.325 16:31:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.325 16:31:49 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.584 16:31:49 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.584 16:31:49 -- common/autotest_common.sh@1187 -- # local i=0 00:12:12.584 16:31:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.584 16:31:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:12.584 16:31:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:14.486 16:31:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:14.486 16:31:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:14.486 16:31:51 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.486 16:31:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:14.486 16:31:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.486 16:31:51 -- common/autotest_common.sh@1197 -- # return 0 00:12:14.486 16:31:51 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.486 16:31:51 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.486 16:31:51 -- common/autotest_common.sh@1208 -- # local i=0 00:12:14.486 16:31:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:14.486 16:31:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
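Each of the five rpc.sh loop iterations (loops=5 is set near the top of rpc.sh) repeats the create/connect/verify/teardown cycle traced above. One iteration, with every rpc_cmd and nvme invocation taken from the log and only the waitforserial polling sketched:

rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5     # fixed nsid 5
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# waitforserial, sketched: the device is usable once lsblk shows the subsystem serial.
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 2
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1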
00:12:14.486 16:31:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.486 16:31:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:14.486 16:31:51 -- common/autotest_common.sh@1220 -- # return 0 00:12:14.486 16:31:51 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.486 16:31:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.486 16:31:51 -- common/autotest_common.sh@10 -- # set +x 00:12:14.486 16:31:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.486 16:31:51 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.486 16:31:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.486 16:31:51 -- common/autotest_common.sh@10 -- # set +x 00:12:14.486 16:31:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.486 16:31:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.486 16:31:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.486 16:31:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.486 16:31:51 -- common/autotest_common.sh@10 -- # set +x 00:12:14.486 16:31:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.486 16:31:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.486 16:31:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.486 16:31:51 -- common/autotest_common.sh@10 -- # set +x 00:12:14.486 [2024-11-16 16:31:51.973828] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.749 16:31:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.749 16:31:51 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.749 16:31:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.749 16:31:51 -- common/autotest_common.sh@10 -- # set +x 00:12:14.749 16:31:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.749 16:31:51 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.749 16:31:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.749 16:31:51 -- common/autotest_common.sh@10 -- # set +x 00:12:14.749 16:31:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.749 16:31:51 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.749 16:31:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.749 16:31:52 -- common/autotest_common.sh@1187 -- # local i=0 00:12:14.749 16:31:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.749 16:31:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:14.749 16:31:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:17.282 16:31:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:17.282 16:31:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:17.282 16:31:54 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.282 16:31:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:17.282 16:31:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.282 16:31:54 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:17.282 16:31:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.282 16:31:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.282 16:31:54 -- common/autotest_common.sh@1208 -- # local i=0 00:12:17.282 16:31:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:17.282 16:31:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.282 16:31:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:17.282 16:31:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.282 16:31:54 -- common/autotest_common.sh@1220 -- # return 0 00:12:17.282 16:31:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.282 16:31:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.282 16:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:17.282 16:31:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.282 16:31:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.282 16:31:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.282 16:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:17.282 16:31:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.282 16:31:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:17.282 16:31:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.282 16:31:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.282 16:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:17.282 16:31:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.282 16:31:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.282 16:31:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.282 16:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:17.282 [2024-11-16 16:31:54.394298] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.282 16:31:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.282 16:31:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.282 16:31:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.282 16:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:17.282 16:31:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.282 16:31:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.282 16:31:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.282 16:31:54 -- common/autotest_common.sh@10 -- # set +x 00:12:17.282 16:31:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.282 16:31:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.282 16:31:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.282 16:31:54 -- common/autotest_common.sh@1187 -- # local i=0 00:12:17.282 16:31:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.282 16:31:54 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:17.282 16:31:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:19.187 16:31:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:19.187 16:31:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:19.187 16:31:56 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.187 16:31:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:19.187 16:31:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.187 16:31:56 -- common/autotest_common.sh@1197 -- # return 0 00:12:19.187 16:31:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.446 16:31:56 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.446 16:31:56 -- common/autotest_common.sh@1208 -- # local i=0 00:12:19.446 16:31:56 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:19.446 16:31:56 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.446 16:31:56 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:19.446 16:31:56 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.446 16:31:56 -- common/autotest_common.sh@1220 -- # return 0 00:12:19.446 16:31:56 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:19.446 16:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.446 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.446 16:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.446 16:31:56 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.446 16:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.446 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.446 16:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.446 16:31:56 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:19.446 16:31:56 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.446 16:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.446 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.446 16:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.446 16:31:56 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.446 16:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.446 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.446 [2024-11-16 16:31:56.823133] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.446 16:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.446 16:31:56 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:19.446 16:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.446 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.446 16:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.446 16:31:56 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.446 16:31:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.446 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:12:19.446 16:31:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.446 
16:31:56 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.705 16:31:57 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.705 16:31:57 -- common/autotest_common.sh@1187 -- # local i=0 00:12:19.705 16:31:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.705 16:31:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:19.705 16:31:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:21.670 16:31:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:21.670 16:31:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:21.670 16:31:59 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.670 16:31:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:21.670 16:31:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.670 16:31:59 -- common/autotest_common.sh@1197 -- # return 0 00:12:21.670 16:31:59 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.670 16:31:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.670 16:31:59 -- common/autotest_common.sh@1208 -- # local i=0 00:12:21.670 16:31:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:21.670 16:31:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.670 16:31:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.670 16:31:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:21.670 16:31:59 -- common/autotest_common.sh@1220 -- # return 0 00:12:21.670 16:31:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:21.670 16:31:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.670 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:21.670 16:31:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.670 16:31:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.670 16:31:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.671 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:21.671 16:31:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.671 16:31:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:21.671 16:31:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.671 16:31:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.671 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:21.671 16:31:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.671 16:31:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.671 16:31:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.671 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:21.671 [2024-11-16 16:31:59.119272] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.671 16:31:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.671 16:31:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:21.671 
16:31:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.671 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:21.671 16:31:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.671 16:31:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.671 16:31:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.671 16:31:59 -- common/autotest_common.sh@10 -- # set +x 00:12:21.671 16:31:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.671 16:31:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.944 16:31:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.944 16:31:59 -- common/autotest_common.sh@1187 -- # local i=0 00:12:21.944 16:31:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.944 16:31:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:21.944 16:31:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:23.853 16:32:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:23.853 16:32:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:23.853 16:32:01 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.853 16:32:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:23.853 16:32:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.853 16:32:01 -- common/autotest_common.sh@1197 -- # return 0 00:12:23.853 16:32:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.113 16:32:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.113 16:32:01 -- common/autotest_common.sh@1208 -- # local i=0 00:12:24.113 16:32:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:24.113 16:32:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.113 16:32:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:24.113 16:32:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.113 16:32:01 -- common/autotest_common.sh@1220 -- # return 0 00:12:24.113 16:32:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.113 16:32:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.113 16:32:01 -- target/rpc.sh@99 -- # seq 1 5 00:12:24.113 16:32:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.113 16:32:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.113 16:32:01 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 [2024-11-16 16:32:01.548573] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.113 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.113 16:32:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.113 16:32:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.113 16:32:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.113 16:32:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.113 16:32:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.113 16:32:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.113 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.113 16:32:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.113 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.113 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.114 [2024-11-16 16:32:01.596542] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.114 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.114 16:32:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.114 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.373 16:32:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 [2024-11-16 16:32:01.648636] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.373 16:32:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 [2024-11-16 16:32:01.696752] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 
16:32:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:24.373 16:32:01 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 [2024-11-16 16:32:01.744802] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.373 16:32:01 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.373 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.373 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.373 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.374 16:32:01 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:12:24.374 16:32:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.374 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:24.374 16:32:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.374 16:32:01 -- target/rpc.sh@110 -- # stats='{ 00:12:24.374 "poll_groups": [ 00:12:24.374 { 00:12:24.374 "admin_qpairs": 2, 00:12:24.374 "completed_nvme_io": 66, 00:12:24.374 "current_admin_qpairs": 0, 00:12:24.374 "current_io_qpairs": 0, 00:12:24.374 "io_qpairs": 16, 00:12:24.374 "name": "nvmf_tgt_poll_group_0", 00:12:24.374 "pending_bdev_io": 0, 00:12:24.374 "transports": [ 00:12:24.374 { 00:12:24.374 "trtype": "TCP" 00:12:24.374 } 00:12:24.374 ] 00:12:24.374 }, 00:12:24.374 { 00:12:24.374 "admin_qpairs": 3, 00:12:24.374 "completed_nvme_io": 116, 00:12:24.374 "current_admin_qpairs": 0, 00:12:24.374 "current_io_qpairs": 0, 00:12:24.374 "io_qpairs": 17, 00:12:24.374 "name": "nvmf_tgt_poll_group_1", 00:12:24.374 "pending_bdev_io": 0, 00:12:24.374 "transports": [ 00:12:24.374 { 00:12:24.374 "trtype": "TCP" 00:12:24.374 } 00:12:24.374 ] 00:12:24.374 }, 00:12:24.374 { 00:12:24.374 "admin_qpairs": 1, 00:12:24.374 "completed_nvme_io": 167, 00:12:24.374 "current_admin_qpairs": 0, 00:12:24.374 "current_io_qpairs": 0, 00:12:24.374 "io_qpairs": 19, 00:12:24.374 "name": "nvmf_tgt_poll_group_2", 00:12:24.374 "pending_bdev_io": 0, 00:12:24.374 "transports": [ 00:12:24.374 { 00:12:24.374 "trtype": "TCP" 00:12:24.374 } 00:12:24.374 ] 00:12:24.374 }, 00:12:24.374 { 00:12:24.374 "admin_qpairs": 1, 00:12:24.374 "completed_nvme_io": 71, 00:12:24.374 "current_admin_qpairs": 0, 00:12:24.374 "current_io_qpairs": 0, 00:12:24.374 "io_qpairs": 18, 00:12:24.374 "name": "nvmf_tgt_poll_group_3", 00:12:24.374 "pending_bdev_io": 0, 00:12:24.374 "transports": [ 00:12:24.374 { 00:12:24.374 "trtype": "TCP" 00:12:24.374 } 00:12:24.374 ] 00:12:24.374 } 00:12:24.374 ], 00:12:24.374 "tick_rate": 2200000000 00:12:24.374 }' 00:12:24.374 16:32:01 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:24.374 16:32:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:24.374 16:32:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:24.374 16:32:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.632 16:32:01 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:24.632 16:32:01 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:24.632 16:32:01 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:24.632 16:32:01 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:24.632 16:32:01 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:24.632 16:32:01 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:24.632 16:32:01 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:24.632 16:32:01 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:24.632 16:32:01 -- target/rpc.sh@123 -- # nvmftestfini 00:12:24.632 16:32:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:24.632 16:32:01 -- nvmf/common.sh@116 -- # sync 00:12:24.633 16:32:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:24.633 16:32:01 -- nvmf/common.sh@119 -- # set +e 00:12:24.633 16:32:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:24.633 16:32:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:24.633 rmmod nvme_tcp 00:12:24.633 rmmod nvme_fabrics 00:12:24.633 rmmod nvme_keyring 00:12:24.633 16:32:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:24.633 16:32:02 -- nvmf/common.sh@123 -- # set -e 00:12:24.633 16:32:02 -- nvmf/common.sh@124 
-- # return 0 00:12:24.633 16:32:02 -- nvmf/common.sh@477 -- # '[' -n 78079 ']' 00:12:24.633 16:32:02 -- nvmf/common.sh@478 -- # killprocess 78079 00:12:24.633 16:32:02 -- common/autotest_common.sh@936 -- # '[' -z 78079 ']' 00:12:24.633 16:32:02 -- common/autotest_common.sh@940 -- # kill -0 78079 00:12:24.633 16:32:02 -- common/autotest_common.sh@941 -- # uname 00:12:24.633 16:32:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:24.633 16:32:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78079 00:12:24.633 16:32:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:24.633 16:32:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:24.633 killing process with pid 78079 00:12:24.633 16:32:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78079' 00:12:24.633 16:32:02 -- common/autotest_common.sh@955 -- # kill 78079 00:12:24.633 16:32:02 -- common/autotest_common.sh@960 -- # wait 78079 00:12:24.892 16:32:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:24.892 16:32:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:24.892 16:32:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:24.892 16:32:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.892 16:32:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:24.892 16:32:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.892 16:32:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.892 16:32:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.892 16:32:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:24.892 00:12:24.892 real 0m19.527s 00:12:24.892 user 1m13.840s 00:12:24.892 sys 0m2.155s 00:12:24.892 16:32:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:24.892 ************************************ 00:12:24.892 END TEST nvmf_rpc 00:12:24.892 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:24.892 ************************************ 00:12:25.152 16:32:02 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:25.152 16:32:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:25.152 16:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.152 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:25.152 ************************************ 00:12:25.152 START TEST nvmf_invalid 00:12:25.152 ************************************ 00:12:25.152 16:32:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:25.152 * Looking for test storage... 
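The jsum helper exercised at target/rpc.sh@112-113 above reduces each nvmf_get_stats field to a single total with jq plus awk. A standalone sketch using the exact filter strings from the trace (the real helper reuses the captured $stats JSON rather than re-issuing the RPC; calling rpc.py directly here is an assumption for self-containment):

    jsum() {
        local filter=$1
        # Emit one number per poll group, then sum the column with awk.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_stats \
            | jq "$filter" \
            | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70 here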
00:12:25.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.152 16:32:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:25.152 16:32:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:25.152 16:32:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:25.152 16:32:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:25.152 16:32:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:25.152 16:32:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:25.152 16:32:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:25.152 16:32:02 -- scripts/common.sh@335 -- # IFS=.-: 00:12:25.152 16:32:02 -- scripts/common.sh@335 -- # read -ra ver1 00:12:25.152 16:32:02 -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.152 16:32:02 -- scripts/common.sh@336 -- # read -ra ver2 00:12:25.152 16:32:02 -- scripts/common.sh@337 -- # local 'op=<' 00:12:25.152 16:32:02 -- scripts/common.sh@339 -- # ver1_l=2 00:12:25.152 16:32:02 -- scripts/common.sh@340 -- # ver2_l=1 00:12:25.152 16:32:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:25.152 16:32:02 -- scripts/common.sh@343 -- # case "$op" in 00:12:25.152 16:32:02 -- scripts/common.sh@344 -- # : 1 00:12:25.152 16:32:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:25.152 16:32:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:25.152 16:32:02 -- scripts/common.sh@364 -- # decimal 1 00:12:25.152 16:32:02 -- scripts/common.sh@352 -- # local d=1 00:12:25.152 16:32:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.152 16:32:02 -- scripts/common.sh@354 -- # echo 1 00:12:25.152 16:32:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:25.152 16:32:02 -- scripts/common.sh@365 -- # decimal 2 00:12:25.152 16:32:02 -- scripts/common.sh@352 -- # local d=2 00:12:25.152 16:32:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.152 16:32:02 -- scripts/common.sh@354 -- # echo 2 00:12:25.152 16:32:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:25.152 16:32:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:25.152 16:32:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:25.152 16:32:02 -- scripts/common.sh@367 -- # return 0 00:12:25.152 16:32:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.152 16:32:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:25.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.152 --rc genhtml_branch_coverage=1 00:12:25.152 --rc genhtml_function_coverage=1 00:12:25.152 --rc genhtml_legend=1 00:12:25.152 --rc geninfo_all_blocks=1 00:12:25.152 --rc geninfo_unexecuted_blocks=1 00:12:25.152 00:12:25.152 ' 00:12:25.152 16:32:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:25.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.152 --rc genhtml_branch_coverage=1 00:12:25.152 --rc genhtml_function_coverage=1 00:12:25.152 --rc genhtml_legend=1 00:12:25.152 --rc geninfo_all_blocks=1 00:12:25.152 --rc geninfo_unexecuted_blocks=1 00:12:25.152 00:12:25.152 ' 00:12:25.152 16:32:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:25.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.152 --rc genhtml_branch_coverage=1 00:12:25.152 --rc genhtml_function_coverage=1 00:12:25.152 --rc genhtml_legend=1 00:12:25.152 --rc geninfo_all_blocks=1 00:12:25.152 --rc geninfo_unexecuted_blocks=1 00:12:25.152 00:12:25.152 ' 00:12:25.152 
16:32:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:25.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.152 --rc genhtml_branch_coverage=1 00:12:25.152 --rc genhtml_function_coverage=1 00:12:25.152 --rc genhtml_legend=1 00:12:25.152 --rc geninfo_all_blocks=1 00:12:25.152 --rc geninfo_unexecuted_blocks=1 00:12:25.152 00:12:25.152 ' 00:12:25.152 16:32:02 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.152 16:32:02 -- nvmf/common.sh@7 -- # uname -s 00:12:25.152 16:32:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.152 16:32:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.152 16:32:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.152 16:32:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.152 16:32:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.152 16:32:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.152 16:32:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.152 16:32:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.152 16:32:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.152 16:32:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.152 16:32:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:25.152 16:32:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:25.152 16:32:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.152 16:32:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.152 16:32:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.152 16:32:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.153 16:32:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.153 16:32:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.153 16:32:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.153 16:32:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.153 16:32:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.153 16:32:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.153 16:32:02 -- paths/export.sh@5 -- # export PATH 00:12:25.153 16:32:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.153 16:32:02 -- nvmf/common.sh@46 -- # : 0 00:12:25.153 16:32:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:25.153 16:32:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:25.153 16:32:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:25.153 16:32:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.153 16:32:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.153 16:32:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:25.153 16:32:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:25.153 16:32:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:25.153 16:32:02 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:25.153 16:32:02 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.153 16:32:02 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:25.153 16:32:02 -- target/invalid.sh@14 -- # target=foobar 00:12:25.153 16:32:02 -- target/invalid.sh@16 -- # RANDOM=0 00:12:25.153 16:32:02 -- target/invalid.sh@34 -- # nvmftestinit 00:12:25.153 16:32:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:25.153 16:32:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.153 16:32:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:25.153 16:32:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:25.153 16:32:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:25.153 16:32:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.153 16:32:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.153 16:32:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.153 16:32:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:25.153 16:32:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:25.153 16:32:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:25.153 16:32:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:25.153 16:32:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:25.153 16:32:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:25.153 16:32:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.153 16:32:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.153 16:32:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
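Every nvme connect in this log carries the same --hostnqn/--hostid pair; those come from the NVME_HOST array built at nvmf/common.sh@17-19, traced just above. A sketch of that convention (the hostid derivation shown is an assumption that merely matches the traced values, where the host ID is the bare UUID suffix of the generated host NQN):

    # Generate a host NQN once; reuse its UUID suffix as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare <uuid>
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Later connects splice the array in, e.g.:
    # nvme connect "${NVME_HOST[@]}" -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420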
00:12:25.153 16:32:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:25.153 16:32:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.153 16:32:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.153 16:32:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.153 16:32:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.153 16:32:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.153 16:32:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.153 16:32:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.153 16:32:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.153 16:32:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:25.153 16:32:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:25.412 Cannot find device "nvmf_tgt_br" 00:12:25.412 16:32:02 -- nvmf/common.sh@154 -- # true 00:12:25.412 16:32:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.412 Cannot find device "nvmf_tgt_br2" 00:12:25.412 16:32:02 -- nvmf/common.sh@155 -- # true 00:12:25.412 16:32:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:25.412 16:32:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:25.412 Cannot find device "nvmf_tgt_br" 00:12:25.412 16:32:02 -- nvmf/common.sh@157 -- # true 00:12:25.412 16:32:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:25.412 Cannot find device "nvmf_tgt_br2" 00:12:25.412 16:32:02 -- nvmf/common.sh@158 -- # true 00:12:25.412 16:32:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:25.412 16:32:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:25.412 16:32:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.412 16:32:02 -- nvmf/common.sh@161 -- # true 00:12:25.412 16:32:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.412 16:32:02 -- nvmf/common.sh@162 -- # true 00:12:25.412 16:32:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.412 16:32:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.412 16:32:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:25.412 16:32:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:25.412 16:32:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:25.412 16:32:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:25.412 16:32:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:25.412 16:32:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:25.412 16:32:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:25.412 16:32:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:25.412 16:32:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:25.412 16:32:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:25.412 16:32:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
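nvmf_veth_init, traced above, gives the target its own network namespace and bridges it back to the initiator. Condensed to one veth pair (names, addresses, and rules exactly as traced; the second target interface carrying 10.0.0.3 follows the same pattern and is omitted):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry traffic, the *_br ends join a bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator on 10.0.0.1, target on 10.0.0.2, inside one /24.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Bridge the *_br peers and open TCP 4420 toward the initiator side.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT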
00:12:25.412 16:32:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:25.412 16:32:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:25.412 16:32:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:25.412 16:32:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:25.412 16:32:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:25.412 16:32:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:25.412 16:32:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:25.671 16:32:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:25.671 16:32:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:25.671 16:32:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:25.671 16:32:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:25.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:25.671 00:12:25.671 --- 10.0.0.2 ping statistics --- 00:12:25.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.671 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:25.671 16:32:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:25.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:25.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:12:25.671 00:12:25.671 --- 10.0.0.3 ping statistics --- 00:12:25.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.671 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:25.671 16:32:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:25.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:12:25.671 00:12:25.671 --- 10.0.0.1 ping statistics --- 00:12:25.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.671 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:25.671 16:32:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.671 16:32:02 -- nvmf/common.sh@421 -- # return 0 00:12:25.671 16:32:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:25.671 16:32:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.671 16:32:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:25.671 16:32:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:25.671 16:32:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.671 16:32:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:25.671 16:32:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:25.671 16:32:02 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:25.671 16:32:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:25.671 16:32:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:25.671 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:25.671 16:32:02 -- nvmf/common.sh@469 -- # nvmfpid=78611 00:12:25.671 16:32:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.671 16:32:02 -- nvmf/common.sh@470 -- # waitforlisten 78611 00:12:25.671 16:32:02 -- common/autotest_common.sh@829 -- # '[' -z 78611 ']' 00:12:25.671 16:32:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.671 16:32:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.671 16:32:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.671 16:32:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.671 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:12:25.671 [2024-11-16 16:32:03.029121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:25.671 [2024-11-16 16:32:03.029198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.930 [2024-11-16 16:32:03.172218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.930 [2024-11-16 16:32:03.244972] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:25.930 [2024-11-16 16:32:03.245147] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.930 [2024-11-16 16:32:03.245163] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.930 [2024-11-16 16:32:03.245172] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:25.930 [2024-11-16 16:32:03.245307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.930 [2024-11-16 16:32:03.246372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.930 [2024-11-16 16:32:03.246442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.930 [2024-11-16 16:32:03.246449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.498 16:32:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.498 16:32:03 -- common/autotest_common.sh@862 -- # return 0 00:12:26.498 16:32:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:26.498 16:32:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.498 16:32:03 -- common/autotest_common.sh@10 -- # set +x 00:12:26.757 16:32:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.757 16:32:04 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:26.757 16:32:04 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25089 00:12:27.016 [2024-11-16 16:32:04.281023] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:27.016 16:32:04 -- target/invalid.sh@40 -- # out='2024/11/16 16:32:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25089 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:27.016 request: 00:12:27.016 { 00:12:27.016 "method": "nvmf_create_subsystem", 00:12:27.016 "params": { 00:12:27.016 "nqn": "nqn.2016-06.io.spdk:cnode25089", 00:12:27.016 "tgt_name": "foobar" 00:12:27.016 } 00:12:27.016 } 00:12:27.016 Got JSON-RPC error response 00:12:27.016 GoRPCClient: error on JSON-RPC call' 00:12:27.016 16:32:04 -- target/invalid.sh@41 -- # [[ 2024/11/16 16:32:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25089 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:27.016 request: 00:12:27.016 { 00:12:27.016 "method": "nvmf_create_subsystem", 00:12:27.016 "params": { 00:12:27.016 "nqn": "nqn.2016-06.io.spdk:cnode25089", 00:12:27.016 "tgt_name": "foobar" 00:12:27.016 } 00:12:27.016 } 00:12:27.016 Got JSON-RPC error response 00:12:27.016 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:27.016 16:32:04 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:27.016 16:32:04 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14558 00:12:27.274 [2024-11-16 16:32:04.593496] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14558: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:27.274 16:32:04 -- target/invalid.sh@45 -- # out='2024/11/16 16:32:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14558 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:27.274 request: 00:12:27.274 { 00:12:27.274 "method": "nvmf_create_subsystem", 00:12:27.274 "params": { 00:12:27.275 "nqn": "nqn.2016-06.io.spdk:cnode14558", 00:12:27.275 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:27.275 } 00:12:27.275 } 00:12:27.275 Got JSON-RPC error response 00:12:27.275 GoRPCClient: error on JSON-RPC call' 00:12:27.275 16:32:04 -- target/invalid.sh@46 -- # [[ 2024/11/16 16:32:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14558 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:27.275 request: 00:12:27.275 { 00:12:27.275 "method": "nvmf_create_subsystem", 00:12:27.275 "params": { 00:12:27.275 "nqn": "nqn.2016-06.io.spdk:cnode14558", 00:12:27.275 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:27.275 } 00:12:27.275 } 00:12:27.275 Got JSON-RPC error response 00:12:27.275 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:27.275 16:32:04 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:27.275 16:32:04 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14512 00:12:27.534 [2024-11-16 16:32:04.897905] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14512: invalid model number 'SPDK_Controller' 00:12:27.534 16:32:04 -- target/invalid.sh@50 -- # out='2024/11/16 16:32:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode14512], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:27.534 request: 00:12:27.534 { 00:12:27.534 "method": "nvmf_create_subsystem", 00:12:27.534 "params": { 00:12:27.534 "nqn": "nqn.2016-06.io.spdk:cnode14512", 00:12:27.534 "model_number": "SPDK_Controller\u001f" 00:12:27.534 } 00:12:27.534 } 00:12:27.534 Got JSON-RPC error response 00:12:27.534 GoRPCClient: error on JSON-RPC call' 00:12:27.534 16:32:04 -- target/invalid.sh@51 -- # [[ 2024/11/16 16:32:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode14512], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:27.534 request: 00:12:27.534 { 00:12:27.534 "method": "nvmf_create_subsystem", 00:12:27.534 "params": { 00:12:27.534 "nqn": "nqn.2016-06.io.spdk:cnode14512", 00:12:27.534 "model_number": "SPDK_Controller\u001f" 00:12:27.534 } 00:12:27.534 } 00:12:27.534 Got JSON-RPC error response 00:12:27.534 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:27.534 16:32:04 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:27.534 16:32:04 -- target/invalid.sh@19 -- # local length=21 ll 00:12:27.535 16:32:04 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:27.535 16:32:04 -- target/invalid.sh@21 -- # local chars 00:12:27.535 16:32:04 -- target/invalid.sh@22 -- # local string 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 121 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=y 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 65 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=A 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 38 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+='&' 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 46 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=. 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 126 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+='~' 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 82 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=R 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 39 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=\' 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 66 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=B 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 64 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=@ 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 84 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=T 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 79 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=O 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 61 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+== 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 82 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=R 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 35 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+='#' 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 105 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # string+=i 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:04 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:04 -- target/invalid.sh@25 -- # printf %x 54 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # string+=6 00:12:27.535 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # printf %x 95 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # string+=_ 00:12:27.535 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # printf %x 117 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # string+=u 00:12:27.535 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # printf %x 54 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:27.535 16:32:05 -- target/invalid.sh@25 -- # string+=6 00:12:27.535 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.535 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.794 16:32:05 -- target/invalid.sh@25 -- # printf %x 71 00:12:27.794 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:27.794 16:32:05 -- target/invalid.sh@25 -- # string+=G 00:12:27.794 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.794 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.794 16:32:05 -- target/invalid.sh@25 -- # printf %x 60 00:12:27.794 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:27.794 16:32:05 -- target/invalid.sh@25 -- # string+='<' 00:12:27.794 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:27.794 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:27.794 16:32:05 -- target/invalid.sh@28 -- # [[ y == \- ]] 00:12:27.794 16:32:05 -- target/invalid.sh@31 -- # echo 'yA&.~R'\''B@TO=R#i6_u6G<' 00:12:27.795 16:32:05 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'yA&.~R'\''B@TO=R#i6_u6G<' 
nqn.2016-06.io.spdk:cnode7256 00:12:28.054 [2024-11-16 16:32:05.306512] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7256: invalid serial number 'yA&.~R'B@TO=R#i6_u6G<' 00:12:28.054 16:32:05 -- target/invalid.sh@54 -- # out='2024/11/16 16:32:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7256 serial_number:yA&.~R'\''B@TO=R#i6_u6G<], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN yA&.~R'\''B@TO=R#i6_u6G< 00:12:28.054 request: 00:12:28.054 { 00:12:28.055 "method": "nvmf_create_subsystem", 00:12:28.055 "params": { 00:12:28.055 "nqn": "nqn.2016-06.io.spdk:cnode7256", 00:12:28.055 "serial_number": "yA&.~R'\''B@TO=R#i6_u6G<" 00:12:28.055 } 00:12:28.055 } 00:12:28.055 Got JSON-RPC error response 00:12:28.055 GoRPCClient: error on JSON-RPC call' 00:12:28.055 16:32:05 -- target/invalid.sh@55 -- # [[ 2024/11/16 16:32:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7256 serial_number:yA&.~R'B@TO=R#i6_u6G<], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN yA&.~R'B@TO=R#i6_u6G< 00:12:28.055 request: 00:12:28.055 { 00:12:28.055 "method": "nvmf_create_subsystem", 00:12:28.055 "params": { 00:12:28.055 "nqn": "nqn.2016-06.io.spdk:cnode7256", 00:12:28.055 "serial_number": "yA&.~R'B@TO=R#i6_u6G<" 00:12:28.055 } 00:12:28.055 } 00:12:28.055 Got JSON-RPC error response 00:12:28.055 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:28.055 16:32:05 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:28.055 16:32:05 -- target/invalid.sh@19 -- # local length=41 ll 00:12:28.055 16:32:05 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:28.055 16:32:05 -- target/invalid.sh@21 -- # local chars 00:12:28.055 16:32:05 -- target/invalid.sh@22 -- # local string 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 37 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=% 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 75 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=K 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 61 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+== 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # 
printf %x 70 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=F 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 50 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=2 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 51 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=3 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 61 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+== 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 83 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=S 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 92 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+='\' 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 72 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=H 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 42 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+='*' 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 94 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+='^' 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 90 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=Z 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 117 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=u 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # 
printf %x 94 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+='^' 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 60 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+='<' 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 81 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=Q 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 39 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=\' 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 100 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=d 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 45 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=- 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 86 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=V 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 75 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=K 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 124 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+='|' 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 52 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=4 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # printf %x 37 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:28.055 16:32:05 -- target/invalid.sh@25 -- # string+=% 00:12:28.055 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # 
printf %x 96 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+='`' 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # printf %x 52 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+=4 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # printf %x 90 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+=Z 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # printf %x 101 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+=e 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # printf %x 113 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+=q 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # printf %x 73 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+=I 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # printf %x 33 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+='!' 
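The invalid.sh cases traced above (unknown target name, control character in the serial number, control character in the model number) all follow the same negative-test pattern: invoke rpc.py with a deliberately bad parameter, capture the JSON-RPC error text, and glob-match the expected message. A minimal sketch of that pattern, using the paths and NQNs from this log (the expect_rpc_error helper is hypothetical, not part of invalid.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # hypothetical helper condensing the invalid.sh check pattern
    expect_rpc_error() {
        local pattern=$1; shift
        local out
        # the call is expected to fail, so capture its output instead of aborting
        out=$("$rpc" "$@" 2>&1) && return 1
        # assert the target reported the anticipated validation error
        [[ $out == *"$pattern"* ]]
    }

    expect_rpc_error 'Unable to find target' \
        nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25089
    expect_rpc_error 'Invalid SN' \
        nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14558

The per-character gen_random_s trace resumes below.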
00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # printf %x 36 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+='$' 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # printf %x 79 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+=O 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # printf %x 126 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:28.056 16:32:05 -- target/invalid.sh@25 -- # string+='~' 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.056 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # printf %x 102 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # string+=f 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # printf %x 96 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # string+='`' 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # printf %x 35 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # string+='#' 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # printf %x 109 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # string+=m 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # printf %x 105 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # string+=i 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # printf %x 63 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:28.316 16:32:05 -- target/invalid.sh@25 -- # string+='?' 
00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:28.316 16:32:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:28.316 16:32:05 -- target/invalid.sh@28 -- # [[ % == \- ]] 00:12:28.316 16:32:05 -- target/invalid.sh@31 -- # echo '%K=F23=S\H*^Zu^ /dev/null' 00:12:31.167 16:32:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.167 16:32:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:31.167 00:12:31.167 real 0m6.062s 00:12:31.167 user 0m23.942s 00:12:31.167 sys 0m1.430s 00:12:31.167 ************************************ 00:12:31.167 END TEST nvmf_invalid 00:12:31.167 ************************************ 00:12:31.167 16:32:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:31.167 16:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:31.167 16:32:08 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:31.167 16:32:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.167 16:32:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.167 16:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:31.167 ************************************ 00:12:31.167 START TEST nvmf_abort 00:12:31.167 ************************************ 00:12:31.167 16:32:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:31.167 * Looking for test storage... 00:12:31.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.167 16:32:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:31.167 16:32:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:31.167 16:32:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:31.426 16:32:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:31.426 16:32:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:31.426 16:32:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:31.426 16:32:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:31.426 16:32:08 -- scripts/common.sh@335 -- # IFS=.-: 00:12:31.426 16:32:08 -- scripts/common.sh@335 -- # read -ra ver1 00:12:31.426 16:32:08 -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.426 16:32:08 -- scripts/common.sh@336 -- # read -ra ver2 00:12:31.426 16:32:08 -- scripts/common.sh@337 -- # local 'op=<' 00:12:31.426 16:32:08 -- scripts/common.sh@339 -- # ver1_l=2 00:12:31.426 16:32:08 -- scripts/common.sh@340 -- # ver2_l=1 00:12:31.426 16:32:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:31.426 16:32:08 -- scripts/common.sh@343 -- # case "$op" in 00:12:31.426 16:32:08 -- scripts/common.sh@344 -- # : 1 00:12:31.426 16:32:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:31.426 16:32:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.426 16:32:08 -- scripts/common.sh@364 -- # decimal 1 00:12:31.426 16:32:08 -- scripts/common.sh@352 -- # local d=1 00:12:31.426 16:32:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.426 16:32:08 -- scripts/common.sh@354 -- # echo 1 00:12:31.426 16:32:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:31.426 16:32:08 -- scripts/common.sh@365 -- # decimal 2 00:12:31.426 16:32:08 -- scripts/common.sh@352 -- # local d=2 00:12:31.426 16:32:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.426 16:32:08 -- scripts/common.sh@354 -- # echo 2 00:12:31.426 16:32:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:31.426 16:32:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:31.426 16:32:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:31.426 16:32:08 -- scripts/common.sh@367 -- # return 0 00:12:31.426 16:32:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.426 16:32:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:31.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.426 --rc genhtml_branch_coverage=1 00:12:31.426 --rc genhtml_function_coverage=1 00:12:31.426 --rc genhtml_legend=1 00:12:31.426 --rc geninfo_all_blocks=1 00:12:31.426 --rc geninfo_unexecuted_blocks=1 00:12:31.426 00:12:31.426 ' 00:12:31.426 16:32:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:31.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.426 --rc genhtml_branch_coverage=1 00:12:31.426 --rc genhtml_function_coverage=1 00:12:31.426 --rc genhtml_legend=1 00:12:31.426 --rc geninfo_all_blocks=1 00:12:31.426 --rc geninfo_unexecuted_blocks=1 00:12:31.426 00:12:31.426 ' 00:12:31.426 16:32:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:31.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.426 --rc genhtml_branch_coverage=1 00:12:31.426 --rc genhtml_function_coverage=1 00:12:31.426 --rc genhtml_legend=1 00:12:31.426 --rc geninfo_all_blocks=1 00:12:31.426 --rc geninfo_unexecuted_blocks=1 00:12:31.426 00:12:31.426 ' 00:12:31.426 16:32:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:31.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.426 --rc genhtml_branch_coverage=1 00:12:31.426 --rc genhtml_function_coverage=1 00:12:31.426 --rc genhtml_legend=1 00:12:31.426 --rc geninfo_all_blocks=1 00:12:31.426 --rc geninfo_unexecuted_blocks=1 00:12:31.426 00:12:31.426 ' 00:12:31.426 16:32:08 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.426 16:32:08 -- nvmf/common.sh@7 -- # uname -s 00:12:31.426 16:32:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.426 16:32:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.426 16:32:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.426 16:32:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.426 16:32:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.426 16:32:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.426 16:32:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.426 16:32:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.426 16:32:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.426 16:32:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.426 16:32:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:31.426 
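The lt 1.15 2 check above comes from the cmp_versions helper in scripts/common.sh: both version strings are split on '.', '-' and ':', then compared component by component. A simplified sketch of that logic (the real helper also routes each field through its decimal normalizer, elided here):

    cmp_lt() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<<"$1"
        read -ra ver2 <<<"$2"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # missing components count as 0, so "1.15" vs "2" decides on 1 < 2
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not "less than"
    }

    cmp_lt 1.15 2 && echo "installed lcov predates 2.x"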
16:32:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:31.426 16:32:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.426 16:32:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.426 16:32:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.426 16:32:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.426 16:32:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.426 16:32:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.426 16:32:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.426 16:32:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.426 16:32:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.426 16:32:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.426 16:32:08 -- paths/export.sh@5 -- # export PATH 00:12:31.426 16:32:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.426 16:32:08 -- nvmf/common.sh@46 -- # : 0 00:12:31.426 16:32:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:31.426 16:32:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:31.426 16:32:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:31.426 16:32:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.426 16:32:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.426 16:32:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
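The nvmftestinit/nvmf_veth_init sequence traced below wires up a small virtual topology: the SPDK target runs inside the nvmf_tgt_ns_spdk network namespace (10.0.0.2 and 10.0.0.3), the initiator side stays in the root namespace (10.0.0.1), and the veth pairs are joined by the nvmf_br bridge. Condensed from the commands in the trace (the link-up steps and the second target interface are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per endpoint; the *_br ends stay on the host bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings at the end of the trace verify this topology before any NVMe-oF traffic is attempted.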
00:12:31.426 16:32:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:31.426 16:32:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:31.426 16:32:08 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.426 16:32:08 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:31.426 16:32:08 -- target/abort.sh@14 -- # nvmftestinit 00:12:31.426 16:32:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:31.426 16:32:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.426 16:32:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:31.426 16:32:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:31.426 16:32:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:31.426 16:32:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.427 16:32:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.427 16:32:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.427 16:32:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:31.427 16:32:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:31.427 16:32:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:31.427 16:32:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:31.427 16:32:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:31.427 16:32:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:31.427 16:32:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.427 16:32:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.427 16:32:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.427 16:32:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:31.427 16:32:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.427 16:32:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.427 16:32:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.427 16:32:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.427 16:32:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.427 16:32:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.427 16:32:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.427 16:32:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.427 16:32:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:31.427 16:32:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:31.427 Cannot find device "nvmf_tgt_br" 00:12:31.427 16:32:08 -- nvmf/common.sh@154 -- # true 00:12:31.427 16:32:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.427 Cannot find device "nvmf_tgt_br2" 00:12:31.427 16:32:08 -- nvmf/common.sh@155 -- # true 00:12:31.427 16:32:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:31.427 16:32:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:31.427 Cannot find device "nvmf_tgt_br" 00:12:31.427 16:32:08 -- nvmf/common.sh@157 -- # true 00:12:31.427 16:32:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:31.427 Cannot find device "nvmf_tgt_br2" 00:12:31.427 16:32:08 -- nvmf/common.sh@158 -- # true 00:12:31.427 16:32:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:31.427 16:32:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:31.427 16:32:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.427 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:31.427 16:32:08 -- nvmf/common.sh@161 -- # true 00:12:31.427 16:32:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.427 16:32:08 -- nvmf/common.sh@162 -- # true 00:12:31.427 16:32:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.427 16:32:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.427 16:32:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.427 16:32:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.427 16:32:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.427 16:32:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.685 16:32:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.685 16:32:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:31.685 16:32:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:31.685 16:32:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:31.685 16:32:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:31.685 16:32:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:31.685 16:32:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:31.685 16:32:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.685 16:32:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.685 16:32:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:31.685 16:32:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:31.685 16:32:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:31.685 16:32:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.685 16:32:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.685 16:32:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:31.685 16:32:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:31.685 16:32:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:31.685 16:32:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:31.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:31.685 00:12:31.685 --- 10.0.0.2 ping statistics --- 00:12:31.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.685 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:31.685 16:32:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:31.685 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:31.685 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:12:31.685 00:12:31.685 --- 10.0.0.3 ping statistics --- 00:12:31.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.685 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:12:31.685 16:32:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:31.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:12:31.685 00:12:31.685 --- 10.0.0.1 ping statistics --- 00:12:31.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.685 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:12:31.685 16:32:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.685 16:32:09 -- nvmf/common.sh@421 -- # return 0 00:12:31.685 16:32:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:31.685 16:32:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.685 16:32:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:31.685 16:32:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:31.685 16:32:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.685 16:32:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:31.685 16:32:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:31.685 16:32:09 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:31.685 16:32:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:31.685 16:32:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.685 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:31.685 16:32:09 -- nvmf/common.sh@469 -- # nvmfpid=79128 00:12:31.685 16:32:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:31.685 16:32:09 -- nvmf/common.sh@470 -- # waitforlisten 79128 00:12:31.685 16:32:09 -- common/autotest_common.sh@829 -- # '[' -z 79128 ']' 00:12:31.685 16:32:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.685 16:32:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.685 16:32:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.685 16:32:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.685 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:12:31.685 [2024-11-16 16:32:09.129215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:31.685 [2024-11-16 16:32:09.129988] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.942 [2024-11-16 16:32:09.270754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.942 [2024-11-16 16:32:09.332242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:31.942 [2024-11-16 16:32:09.332394] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.942 [2024-11-16 16:32:09.332434] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.942 [2024-11-16 16:32:09.332459] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
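With nvmf_tgt launched inside the namespace (the ip netns exec invocation above) and waitforlisten blocking until /var/tmp/spdk.sock answers, abort.sh provisions the target over JSON-RPC. The rpc_cmd calls traced below reduce to this sequence; the 1,000,000-microsecond latencies on the delay bdev presumably keep I/O queued long enough for the abort requests to have something to cancel:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0      # 64 MiB backing bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420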
00:12:31.942 [2024-11-16 16:32:09.333669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.942 [2024-11-16 16:32:09.333855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.942 [2024-11-16 16:32:09.333860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.875 16:32:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.875 16:32:10 -- common/autotest_common.sh@862 -- # return 0 00:12:32.875 16:32:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:32.875 16:32:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.875 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:32.875 16:32:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.875 16:32:10 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:32.875 16:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.875 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:32.875 [2024-11-16 16:32:10.070263] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.875 16:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.875 16:32:10 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:32.875 16:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.875 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:32.875 Malloc0 00:12:32.875 16:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.875 16:32:10 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:32.875 16:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.875 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:32.875 Delay0 00:12:32.875 16:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.875 16:32:10 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:32.875 16:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.875 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:32.875 16:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.875 16:32:10 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:32.875 16:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.875 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:32.875 16:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.875 16:32:10 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:32.875 16:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.875 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:32.875 [2024-11-16 16:32:10.148813] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.875 16:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.875 16:32:10 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:32.875 16:32:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.875 16:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:32.875 16:32:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.875 16:32:10 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:32.875 [2024-11-16 16:32:10.314672] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:35.410 Initializing NVMe Controllers 00:12:35.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:35.410 controller IO queue size 128 less than required 00:12:35.410 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:35.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:35.411 Initialization complete. Launching workers. 00:12:35.411 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 42489 00:12:35.411 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42554, failed to submit 62 00:12:35.411 success 42489, unsuccess 65, failed 0 00:12:35.411 16:32:12 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:35.411 16:32:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.411 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:35.411 16:32:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.411 16:32:12 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:35.411 16:32:12 -- target/abort.sh@38 -- # nvmftestfini 00:12:35.411 16:32:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:35.411 16:32:12 -- nvmf/common.sh@116 -- # sync 00:12:35.411 16:32:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:35.411 16:32:12 -- nvmf/common.sh@119 -- # set +e 00:12:35.411 16:32:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:35.411 16:32:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:35.411 rmmod nvme_tcp 00:12:35.411 rmmod nvme_fabrics 00:12:35.411 rmmod nvme_keyring 00:12:35.411 16:32:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:35.411 16:32:12 -- nvmf/common.sh@123 -- # set -e 00:12:35.411 16:32:12 -- nvmf/common.sh@124 -- # return 0 00:12:35.411 16:32:12 -- nvmf/common.sh@477 -- # '[' -n 79128 ']' 00:12:35.411 16:32:12 -- nvmf/common.sh@478 -- # killprocess 79128 00:12:35.411 16:32:12 -- common/autotest_common.sh@936 -- # '[' -z 79128 ']' 00:12:35.411 16:32:12 -- common/autotest_common.sh@940 -- # kill -0 79128 00:12:35.411 16:32:12 -- common/autotest_common.sh@941 -- # uname 00:12:35.411 16:32:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:35.411 16:32:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79128 00:12:35.411 16:32:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:35.411 16:32:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:35.411 killing process with pid 79128 00:12:35.411 16:32:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79128' 00:12:35.411 16:32:12 -- common/autotest_common.sh@955 -- # kill 79128 00:12:35.411 16:32:12 -- common/autotest_common.sh@960 -- # wait 79128 00:12:35.411 16:32:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:35.411 16:32:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:35.411 16:32:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:35.411 16:32:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.411 16:32:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:35.411 16:32:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.411 
16:32:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.411 16:32:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.411 16:32:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:35.411 00:12:35.411 real 0m4.261s 00:12:35.411 user 0m12.131s 00:12:35.411 sys 0m1.002s 00:12:35.411 16:32:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:35.411 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:35.411 ************************************ 00:12:35.411 END TEST nvmf_abort 00:12:35.411 ************************************ 00:12:35.411 16:32:12 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:35.411 16:32:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:35.411 16:32:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.411 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:35.411 ************************************ 00:12:35.411 START TEST nvmf_ns_hotplug_stress 00:12:35.411 ************************************ 00:12:35.411 16:32:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:35.671 * Looking for test storage... 00:12:35.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:35.671 16:32:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:35.671 16:32:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:35.671 16:32:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:35.671 16:32:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:35.671 16:32:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:35.671 16:32:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:35.671 16:32:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:35.671 16:32:12 -- scripts/common.sh@335 -- # IFS=.-: 00:12:35.671 16:32:12 -- scripts/common.sh@335 -- # read -ra ver1 00:12:35.671 16:32:12 -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.671 16:32:12 -- scripts/common.sh@336 -- # read -ra ver2 00:12:35.671 16:32:12 -- scripts/common.sh@337 -- # local 'op=<' 00:12:35.671 16:32:12 -- scripts/common.sh@339 -- # ver1_l=2 00:12:35.671 16:32:12 -- scripts/common.sh@340 -- # ver2_l=1 00:12:35.671 16:32:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:35.671 16:32:12 -- scripts/common.sh@343 -- # case "$op" in 00:12:35.671 16:32:12 -- scripts/common.sh@344 -- # : 1 00:12:35.671 16:32:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:35.671 16:32:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.671 16:32:12 -- scripts/common.sh@364 -- # decimal 1 00:12:35.671 16:32:12 -- scripts/common.sh@352 -- # local d=1 00:12:35.671 16:32:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.671 16:32:12 -- scripts/common.sh@354 -- # echo 1 00:12:35.671 16:32:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:35.671 16:32:12 -- scripts/common.sh@365 -- # decimal 2 00:12:35.671 16:32:13 -- scripts/common.sh@352 -- # local d=2 00:12:35.671 16:32:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.671 16:32:13 -- scripts/common.sh@354 -- # echo 2 00:12:35.671 16:32:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:35.671 16:32:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:35.671 16:32:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:35.671 16:32:13 -- scripts/common.sh@367 -- # return 0 00:12:35.671 16:32:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.671 16:32:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:35.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.671 --rc genhtml_branch_coverage=1 00:12:35.671 --rc genhtml_function_coverage=1 00:12:35.671 --rc genhtml_legend=1 00:12:35.671 --rc geninfo_all_blocks=1 00:12:35.671 --rc geninfo_unexecuted_blocks=1 00:12:35.671 00:12:35.671 ' 00:12:35.671 16:32:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:35.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.671 --rc genhtml_branch_coverage=1 00:12:35.671 --rc genhtml_function_coverage=1 00:12:35.671 --rc genhtml_legend=1 00:12:35.671 --rc geninfo_all_blocks=1 00:12:35.671 --rc geninfo_unexecuted_blocks=1 00:12:35.671 00:12:35.671 ' 00:12:35.671 16:32:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:35.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.671 --rc genhtml_branch_coverage=1 00:12:35.671 --rc genhtml_function_coverage=1 00:12:35.671 --rc genhtml_legend=1 00:12:35.671 --rc geninfo_all_blocks=1 00:12:35.671 --rc geninfo_unexecuted_blocks=1 00:12:35.671 00:12:35.671 ' 00:12:35.671 16:32:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:35.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.671 --rc genhtml_branch_coverage=1 00:12:35.671 --rc genhtml_function_coverage=1 00:12:35.671 --rc genhtml_legend=1 00:12:35.671 --rc geninfo_all_blocks=1 00:12:35.671 --rc geninfo_unexecuted_blocks=1 00:12:35.671 00:12:35.671 ' 00:12:35.671 16:32:13 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:35.671 16:32:13 -- nvmf/common.sh@7 -- # uname -s 00:12:35.671 16:32:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.671 16:32:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.671 16:32:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.671 16:32:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.671 16:32:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.671 16:32:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.671 16:32:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.671 16:32:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.671 16:32:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.671 16:32:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.671 16:32:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
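The NVME_HOSTNQN generated above and the NVME_HOSTID defined next feed the NVME_HOST argument array used whenever a test drives the kernel initiator. Purely as an illustration (this particular connect is not part of the trace; the target NQN and address mirror the earlier abort run), the expansion looks like:

    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007
    NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"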
00:12:35.671 16:32:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:12:35.671 16:32:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.671 16:32:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.671 16:32:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:35.671 16:32:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.671 16:32:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.671 16:32:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.671 16:32:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.671 16:32:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.671 16:32:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.671 16:32:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.671 16:32:13 -- paths/export.sh@5 -- # export PATH 00:12:35.671 16:32:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.671 16:32:13 -- nvmf/common.sh@46 -- # : 0 00:12:35.671 16:32:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:35.671 16:32:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:35.671 16:32:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:35.671 16:32:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.671 16:32:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.671 16:32:13 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:35.671 16:32:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:35.671 16:32:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:35.671 16:32:13 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:35.671 16:32:13 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:35.671 16:32:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:35.671 16:32:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.671 16:32:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:35.671 16:32:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:35.671 16:32:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:35.671 16:32:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.671 16:32:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.671 16:32:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.671 16:32:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:35.671 16:32:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:35.671 16:32:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:35.671 16:32:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:35.671 16:32:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:35.671 16:32:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:35.671 16:32:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.671 16:32:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.671 16:32:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:35.671 16:32:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:35.671 16:32:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:35.671 16:32:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:35.671 16:32:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:35.671 16:32:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.671 16:32:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:35.671 16:32:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:35.671 16:32:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:35.671 16:32:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:35.671 16:32:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:35.671 16:32:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:35.672 Cannot find device "nvmf_tgt_br" 00:12:35.672 16:32:13 -- nvmf/common.sh@154 -- # true 00:12:35.672 16:32:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.672 Cannot find device "nvmf_tgt_br2" 00:12:35.672 16:32:13 -- nvmf/common.sh@155 -- # true 00:12:35.672 16:32:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:35.672 16:32:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:35.672 Cannot find device "nvmf_tgt_br" 00:12:35.672 16:32:13 -- nvmf/common.sh@157 -- # true 00:12:35.672 16:32:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:35.672 Cannot find device "nvmf_tgt_br2" 00:12:35.672 16:32:13 -- nvmf/common.sh@158 -- # true 00:12:35.672 16:32:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:35.672 16:32:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:35.931 16:32:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.931 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:35.931 16:32:13 -- nvmf/common.sh@161 -- # true 00:12:35.931 16:32:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.931 16:32:13 -- nvmf/common.sh@162 -- # true 00:12:35.931 16:32:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:35.931 16:32:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:35.931 16:32:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.931 16:32:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.931 16:32:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.931 16:32:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.931 16:32:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.931 16:32:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.931 16:32:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.931 16:32:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:35.931 16:32:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:35.931 16:32:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:35.931 16:32:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:35.931 16:32:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.931 16:32:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.931 16:32:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.931 16:32:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:35.931 16:32:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:35.931 16:32:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.931 16:32:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.931 16:32:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.931 16:32:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.931 16:32:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.931 16:32:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:35.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:35.931 00:12:35.931 --- 10.0.0.2 ping statistics --- 00:12:35.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.931 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:35.931 16:32:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:35.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:12:35.931 00:12:35.931 --- 10.0.0.3 ping statistics --- 00:12:35.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.931 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:35.931 16:32:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:35.931 00:12:35.931 --- 10.0.0.1 ping statistics --- 00:12:35.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.931 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:35.931 16:32:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.931 16:32:13 -- nvmf/common.sh@421 -- # return 0 00:12:35.931 16:32:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:35.931 16:32:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.931 16:32:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:35.931 16:32:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:35.931 16:32:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.931 16:32:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:35.931 16:32:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:35.931 16:32:13 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:35.931 16:32:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:35.931 16:32:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.931 16:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:35.931 16:32:13 -- nvmf/common.sh@469 -- # nvmfpid=79395 00:12:35.931 16:32:13 -- nvmf/common.sh@470 -- # waitforlisten 79395 00:12:35.931 16:32:13 -- common/autotest_common.sh@829 -- # '[' -z 79395 ']' 00:12:35.931 16:32:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:35.931 16:32:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.931 16:32:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.931 16:32:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.931 16:32:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.931 16:32:13 -- common/autotest_common.sh@10 -- # set +x 00:12:35.931 [2024-11-16 16:32:13.401058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:35.931 [2024-11-16 16:32:13.401147] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.190 [2024-11-16 16:32:13.537954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.190 [2024-11-16 16:32:13.610107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:36.190 [2024-11-16 16:32:13.610296] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.190 [2024-11-16 16:32:13.610314] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.190 [2024-11-16 16:32:13.610326] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
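Condensed, the nvmf_veth_init sequence traced at nvmf/common.sh@140-@206 builds the following topology; the initial teardown attempts fail ("Cannot find device ...", "No such file or directory") only because a fresh runner has nothing to tear down yet. A runnable sketch under those assumptions (root privileges, same interface names as the trace):

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    # three veth pairs: initiator, first target, second target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # the target ends live inside the namespace
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    # bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # host -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> host

The sub-millisecond RTTs in the ping statistics above confirm the bridged path works before nvmf_tgt is launched inside the namespace.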
00:12:36.190 [2024-11-16 16:32:13.610488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.190 [2024-11-16 16:32:13.610648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.190 [2024-11-16 16:32:13.610658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.126 16:32:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.126 16:32:14 -- common/autotest_common.sh@862 -- # return 0 00:12:37.126 16:32:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:37.126 16:32:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:37.126 16:32:14 -- common/autotest_common.sh@10 -- # set +x 00:12:37.126 16:32:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.126 16:32:14 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:37.126 16:32:14 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:37.384 [2024-11-16 16:32:14.748227] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.384 16:32:14 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:37.643 16:32:14 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.902 [2024-11-16 16:32:15.242990] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.902 16:32:15 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.160 16:32:15 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:38.419 Malloc0 00:12:38.419 16:32:15 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:38.677 Delay0 00:12:38.677 16:32:16 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.936 16:32:16 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:39.194 NULL1 00:12:39.194 16:32:16 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:39.452 16:32:16 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79530 00:12:39.452 16:32:16 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:39.452 16:32:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:39.452 16:32:16 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.829 Read completed with error (sct=0, sc=11) 00:12:40.829 16:32:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.829 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:40.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:40.829 16:32:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:40.829 16:32:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:41.087 true 00:12:41.087 16:32:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:41.087 16:32:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.023 16:32:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.023 16:32:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:42.023 16:32:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:42.281 true 00:12:42.281 16:32:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:42.281 16:32:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.539 16:32:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.798 16:32:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:42.798 16:32:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:43.056 true 00:12:43.056 16:32:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:43.056 16:32:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.994 16:32:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:44.252 16:32:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:44.252 16:32:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:44.252 true 00:12:44.252 16:32:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:44.252 16:32:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.510 16:32:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.769 16:32:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:44.769 16:32:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:45.028 true 00:12:45.028 16:32:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:45.028 16:32:22 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.962 16:32:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
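Each of the cycles above is one pass of the stress loop (ns_hotplug_stress.sh@44-@50): while the spdk_nvme_perf workload started at @40 is still alive, namespace 1 (Delay0) is detached and reattached and NULL1 is resized one block larger; the bare "true" records are the bdev_null_resize acknowledgements. A sketch of that loop, assuming PERF_PID holds the perf pid (79530 in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do       # stop once perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        (( ++null_size ))
        $rpc bdev_null_resize NULL1 "$null_size"    # prints "true"
    done

The "Message suppressed 999 times" records are the initiator side of the same churn: reads completing with error status (sct=0, sc=11) while the namespace is momentarily detached, which is expected noise for a hotplug stress test.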
00:12:46.220 16:32:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:46.220 16:32:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:46.220 true 00:12:46.479 16:32:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:46.479 16:32:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.737 16:32:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.737 16:32:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:46.737 16:32:24 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:46.996 true 00:12:46.996 16:32:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:46.996 16:32:24 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.931 16:32:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.190 16:32:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:48.190 16:32:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:48.448 true 00:12:48.448 16:32:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:48.448 16:32:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.448 16:32:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.719 16:32:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:48.719 16:32:26 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:48.994 true 00:12:48.994 16:32:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:48.994 16:32:26 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.968 16:32:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.226 16:32:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:50.226 16:32:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:50.226 true 00:12:50.226 16:32:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:50.226 16:32:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.485 16:32:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.745 16:32:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:50.745 16:32:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:51.004 true 00:12:51.004 16:32:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:51.004 16:32:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:51.938 16:32:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.196 16:32:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:52.196 16:32:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:52.456 true 00:12:52.456 16:32:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:52.456 16:32:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.714 16:32:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.714 16:32:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:52.714 16:32:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:52.974 true 00:12:52.974 16:32:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:52.974 16:32:30 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.910 16:32:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.169 16:32:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:54.169 16:32:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:54.427 true 00:12:54.427 16:32:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:54.427 16:32:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.686 16:32:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.945 16:32:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:54.945 16:32:32 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:55.204 true 00:12:55.204 16:32:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:55.204 16:32:32 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.139 16:32:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.139 16:32:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:56.139 16:32:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:56.396 true 00:12:56.396 16:32:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:56.396 16:32:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.655 16:32:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.913 16:32:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:56.913 16:32:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:57.171 true 00:12:57.171 16:32:34 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:57.171 16:32:34 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.106 16:32:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.106 16:32:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:58.106 16:32:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:58.364 true 00:12:58.364 16:32:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:58.364 16:32:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.622 16:32:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.880 16:32:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:58.880 16:32:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:59.138 true 00:12:59.138 16:32:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:12:59.138 16:32:36 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.073 16:32:37 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.073 16:32:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:00.073 16:32:37 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:00.331 true 00:13:00.331 16:32:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:00.331 16:32:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.590 16:32:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.848 16:32:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:00.848 16:32:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:01.106 true 00:13:01.106 16:32:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:01.106 16:32:38 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.042 16:32:39 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.300 16:32:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:02.300 16:32:39 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:02.559 true 00:13:02.559 16:32:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:02.559 16:32:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.817 16:32:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.075 16:32:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:03.075 16:32:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:03.334 true 00:13:03.334 16:32:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:03.334 16:32:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.592 16:32:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.851 16:32:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:03.851 16:32:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:03.851 true 00:13:03.851 16:32:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:03.851 16:32:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.226 16:32:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.226 16:32:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:05.226 16:32:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:05.484 true 00:13:05.484 16:32:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:05.484 16:32:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.742 16:32:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.000 16:32:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:06.000 16:32:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:06.258 true 00:13:06.258 16:32:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:06.258 16:32:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.192 16:32:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.192 16:32:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:07.192 16:32:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:07.450 true 00:13:07.450 16:32:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:07.450 16:32:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.708 16:32:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.966 16:32:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:07.966 16:32:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:08.224 true 00:13:08.224 16:32:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:08.224 16:32:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.157 16:32:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.157 16:32:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:09.157 16:32:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:09.415 true 00:13:09.415 16:32:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:09.415 16:32:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.673 Initializing NVMe Controllers 00:13:09.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:09.673 Controller IO queue size 128, less than required. 00:13:09.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:09.673 Controller IO queue size 128, less than required. 00:13:09.673 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:09.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:09.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:09.673 Initialization complete. Launching workers. 00:13:09.673 ======================================================== 00:13:09.673 Latency(us) 00:13:09.673 Device Information : IOPS MiB/s Average min max 00:13:09.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 348.43 0.17 197212.46 3005.15 1116667.27 00:13:09.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14048.01 6.86 9111.39 2641.78 531758.96 00:13:09.673 ======================================================== 00:13:09.673 Total : 14396.43 7.03 13663.87 2641.78 1116667.27 00:13:09.673 00:13:09.673 16:32:47 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.932 16:32:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:09.932 16:32:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:10.190 true 00:13:10.190 16:32:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79530 00:13:10.190 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79530) - No such process 00:13:10.190 16:32:47 -- target/ns_hotplug_stress.sh@53 -- # wait 79530 00:13:10.190 16:32:47 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.448 16:32:47 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.707 16:32:48 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:10.707 16:32:48 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:10.707 16:32:48 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:10.707 16:32:48 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:10.707 16:32:48 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:10.965 null0 00:13:10.965 16:32:48 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:10.965 16:32:48 -- target/ns_hotplug_stress.sh@59 -- # 
(( i < nthreads )) 00:13:10.965 16:32:48 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:11.223 null1 00:13:11.223 16:32:48 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.223 16:32:48 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.223 16:32:48 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:11.481 null2 00:13:11.481 16:32:48 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.481 16:32:48 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.481 16:32:48 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:11.740 null3 00:13:11.740 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.740 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.740 16:32:49 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:11.998 null4 00:13:11.998 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:11.998 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:11.998 16:32:49 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:12.256 null5 00:13:12.256 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.256 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.256 16:32:49 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:12.256 null6 00:13:12.256 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.256 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.256 16:32:49 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:12.515 null7 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:12.515 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.516 16:32:49 -- target/ns_hotplug_stress.sh@66 -- # wait 80585 80586 80589 80590 80591 80594 80596 80597 00:13:12.774 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.774 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.774 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.774 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.775 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.033 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.292 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
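The interleaved add/remove records come from eight concurrent workers (ns_hotplug_stress.sh@58-@66): once the single-namespace phase ends (perf pid 79530 gone, "No such process"), null0..null7 are created and each worker re-adds and removes its own fixed namespace ID ten times while the others do the same; the "wait 80585 ... 80597" record joins all eight. Consolidated into sequential form, the phase amounts to this sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {   # one worker: churn namespace $1 backed by bdev $2
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096    # 100 MiB, 4 KiB blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

Because every worker owns a distinct namespace ID, the eight streams can interleave freely at the target without stepping on each other; the stress comes from the subsystem-level attach/detach path, not from contention on a single namespace.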
00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.552 16:32:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:13.552 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.552 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.552 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:13.810 16:32:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:14.068 16:32:51 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:14.068 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:13:14.327 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:14.327 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:14.327 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:13:14.328 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:14.328 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:14.328 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:13:14.328 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:13:14.328 16:32:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:13:14.328 16:32:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:13:14.328 16:32:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
[... the hot-plug churn continues in this pattern from 00:13:14.328 (16:32:51) through 00:13:17.980 (16:32:55): namespaces 1-8, backed by null bdevs null0-null7, are repeatedly added to and removed from nqn.2016-06.io.spdk:cnode1 in interleaved order while the @16 loop counters advance to 10; roughly 160 near-identical xtrace records are condensed here ...]
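The interleaved records above all come from lines 16-18 of ns_hotplug_stress.sh. A minimal sketch of the loop they imply, assuming one backgrounded worker per namespace (which would explain the out-of-order NSIDs in the trace); rpc.py, the subsystem NQN, and the i < 10 bound are taken from the xtrace output, everything else is reconstruction rather than the script's verbatim source:

    # Sketch (assumed structure, not the verbatim script): eight parallel
    # workers, each hot-adding and hot-removing its own namespace ten times.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    for n in {1..8}; do
        (
            for ((i = 0; i < 10; i++)); do   # matches the '(( ++i )) / (( i < 10 ))' records
                "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"
                "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n"
            done
        ) &
    done
    wait   # the trace's final burst of bare counter increments is the workers draining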
00:13:17.980 16:32:55 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:13:17.980 16:32:55 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:13:17.980 16:32:55 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:17.980 16:32:55 -- nvmf/common.sh@116 -- # sync
00:13:17.980 16:32:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:17.980 16:32:55 -- nvmf/common.sh@119 -- # set +e
00:13:17.980 16:32:55 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:17.980 16:32:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:13:17.980 16:32:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:17.980 16:32:55 -- nvmf/common.sh@123 -- # set -e
00:13:17.980 16:32:55 -- nvmf/common.sh@124 -- # return 0
00:13:17.980 16:32:55 -- nvmf/common.sh@477 -- # '[' -n 79395 ']'
00:13:17.980 16:32:55 -- nvmf/common.sh@478 -- # killprocess 79395
00:13:17.980 16:32:55 -- common/autotest_common.sh@936 -- # '[' -z 79395 ']'
00:13:17.980 16:32:55 -- common/autotest_common.sh@940 -- # kill -0 79395
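nvmftestfini's module teardown is visible in the records above: sync, then up to 20 attempts to unload nvme-tcp (the rmmod lines are modprobe's verbose output) before nvme-fabrics is removed. A sketch of that retry pattern, assuming the loop exits on the first successful unload and backs off between attempts (the trace only shows the first, successful pass):

    # Sketch of the nvmfcleanup retry loop implied by the trace.
    nvmfcleanup() {
        sync
        set +e                                   # unload can fail while connections drain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break     # assumption: stop on first success
            sleep 1                              # assumption: back off before retrying
        done
        modprobe -v -r nvme-fabrics
        set -e
        return 0
    }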
00:13:17.980 16:32:55 -- common/autotest_common.sh@941 -- # uname
00:13:17.980 16:32:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:17.980 16:32:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79395
00:13:17.980 16:32:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:13:17.980 16:32:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
killing process with pid 79395
00:13:17.980 16:32:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79395'
00:13:17.980 16:32:55 -- common/autotest_common.sh@955 -- # kill 79395
00:13:17.980 16:32:55 -- common/autotest_common.sh@960 -- # wait 79395
00:13:18.239 16:32:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:13:18.239 16:32:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:13:18.239 16:32:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:13:18.239 16:32:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:18.239 16:32:55 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:13:18.239 16:32:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:18.239 16:32:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:18.239 16:32:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:18.239 16:32:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:13:18.239
00:13:18.239 real    0m42.747s
00:13:18.239 user    3m23.094s
00:13:18.239 sys     0m11.910s
00:13:18.239 16:32:55 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:18.239 ************************************
00:13:18.239 16:32:55 -- common/autotest_common.sh@10 -- # set +x
00:13:18.239 END TEST nvmf_ns_hotplug_stress
00:13:18.239 ************************************
00:13:18.239 16:32:55 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:18.239 16:32:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:18.239 16:32:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:18.239 16:32:55 -- common/autotest_common.sh@10 -- # set +x
00:13:18.239 ************************************
00:13:18.239 START TEST nvmf_connect_stress
00:13:18.239 ************************************
00:13:18.239 16:32:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:18.239 * Looking for test storage...
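The killprocess records above show how autotest_common.sh stops the target (pid 79395, comm 'reactor_1'): verify the PID is set and alive, check the process name with ps, then kill and wait. A sketch of the path the trace exercises; the sudo branch is reduced to a comment because it is checked but not taken here:

    # Sketch of killprocess as exercised by the trace above.
    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid"                                    # fail fast if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            :   # a sudo wrapper would need its child signalled instead (not hit here)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }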
00:13:18.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:13:18.239 16:32:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:18.499 16:32:55 -- common/autotest_common.sh@1690 -- # lcov --version
00:13:18.499 16:32:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:18.499 16:32:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:18.499 16:32:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:18.499 16:32:55 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:18.499 16:32:55 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:18.499 16:32:55 -- scripts/common.sh@335 -- # IFS=.-:
00:13:18.499 16:32:55 -- scripts/common.sh@335 -- # read -ra ver1
00:13:18.499 16:32:55 -- scripts/common.sh@336 -- # IFS=.-:
00:13:18.499 16:32:55 -- scripts/common.sh@336 -- # read -ra ver2
00:13:18.499 16:32:55 -- scripts/common.sh@337 -- # local 'op=<'
00:13:18.499 16:32:55 -- scripts/common.sh@339 -- # ver1_l=2
00:13:18.499 16:32:55 -- scripts/common.sh@340 -- # ver2_l=1
00:13:18.499 16:32:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:18.499 16:32:55 -- scripts/common.sh@343 -- # case "$op" in
00:13:18.499 16:32:55 -- scripts/common.sh@344 -- # : 1
00:13:18.499 16:32:55 -- scripts/common.sh@363 -- # (( v = 0 ))
00:13:18.499 16:32:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:18.499 16:32:55 -- scripts/common.sh@364 -- # decimal 1
00:13:18.499 16:32:55 -- scripts/common.sh@352 -- # local d=1
00:13:18.499 16:32:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:18.499 16:32:55 -- scripts/common.sh@354 -- # echo 1
00:13:18.499 16:32:55 -- scripts/common.sh@364 -- # ver1[v]=1
00:13:18.499 16:32:55 -- scripts/common.sh@365 -- # decimal 2
00:13:18.499 16:32:55 -- scripts/common.sh@352 -- # local d=2
00:13:18.499 16:32:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:18.499 16:32:55 -- scripts/common.sh@354 -- # echo 2
00:13:18.499 16:32:55 -- scripts/common.sh@365 -- # ver2[v]=2
00:13:18.499 16:32:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:18.499 16:32:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:18.499 16:32:55 -- scripts/common.sh@367 -- # return 0
00:13:18.499 16:32:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:18.499 16:32:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:18.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:18.499 --rc genhtml_branch_coverage=1
00:13:18.499 --rc genhtml_function_coverage=1
00:13:18.499 --rc genhtml_legend=1
00:13:18.499 --rc geninfo_all_blocks=1
00:13:18.499 --rc geninfo_unexecuted_blocks=1
00:13:18.499
00:13:18.499 '
[... the same multi-line flag block is echoed three more times as LCOV_OPTS=, export 'LCOV=lcov ...', and LCOV='lcov ...' are assigned; those duplicate records are condensed here ...]
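The scripts/common.sh trace above is the `lt 1.15 2` check that decides which lcov coverage flags apply: both version strings are split on '.', '-' and ':' and compared element by element, returning success because 1 < 2 at the first position. A condensed sketch of that comparison, simplified to numeric components only (the real cmp_versions also handles other operators through its case statement):

    # Sketch of the element-wise version compare traced above (lt 1.15 2 -> true).
    cmp_versions() {
        local IFS=.-:                 # split version strings on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v a b
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
            if ((a > b)); then [[ $op == '>' ]] && return 0 || return 1; fi
            if ((a < b)); then [[ $op == '<' ]] && return 0 || return 1; fi
        done
        return 1                      # equal: strict '<' and '>' both fail
    }
    lt() { cmp_versions "$1" '<' "$2"; }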
00:13:18.499 16:32:55 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:13:18.499 16:32:55 -- nvmf/common.sh@7 -- # uname -s
00:13:18.499 16:32:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:18.499 16:32:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:18.499 16:32:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:18.499 16:32:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:18.499 16:32:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:18.499 16:32:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:18.499 16:32:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:18.499 16:32:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:18.499 16:32:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:18.499 16:32:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:18.499 16:32:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007
00:13:18.499 16:32:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007
00:13:18.499 16:32:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:18.499 16:32:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:18.499 16:32:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:13:18.499 16:32:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:18.499 16:32:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:18.499 16:32:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:18.499 16:32:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... the paths/export.sh@2-@6 records are condensed here: they prepend the /opt/golangci/1.54.2, /opt/protoc/21.7, and /opt/go/1.21.1 bin directories to PATH (several times over, as each sourcing prepends again), export PATH, and echo the result ...]
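nvmf/common.sh derives a host identity once per run, as the @17-@19 records show: `nvme gen-hostnqn` produces the host NQN, the UUID suffix becomes NVME_HOSTID, and both are packaged as ready-to-splice CLI arguments. A sketch of that pattern; the suffix-stripping step is an assumption, since the trace only shows the already-expanded values:

    # Sketch of the host-identity setup traced above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: strip through the last ':' to keep the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later, an initiator can splice these straight into a connect call:
    #   nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n <subnqn>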
00:13:18.500 16:32:55 -- nvmf/common.sh@46 -- # : 0
00:13:18.500 16:32:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:13:18.500 16:32:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:13:18.500 16:32:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:13:18.500 16:32:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:18.500 16:32:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:18.500 16:32:55 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:13:18.500 16:32:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:13:18.500 16:32:55 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:13:18.500 16:32:55 -- target/connect_stress.sh@12 -- # nvmftestinit
00:13:18.500 16:32:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:13:18.500 16:32:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:18.500 16:32:55 -- nvmf/common.sh@436 -- # prepare_net_devs
00:13:18.500 16:32:55 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:13:18.500 16:32:55 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:13:18.500 16:32:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:18.500 16:32:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:18.500 16:32:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:18.500 16:32:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:13:18.500 16:32:55 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:13:18.500 16:32:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:13:18.500 16:32:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:13:18.500 16:32:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:13:18.500 16:32:55 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:13:18.500 16:32:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:18.500 16:32:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:18.500 16:32:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:13:18.500 16:32:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:13:18.500 16:32:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:13:18.500 16:32:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:13:18.500 16:32:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:13:18.500 16:32:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:18.500 16:32:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:13:18.500 16:32:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:13:18.500 16:32:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:13:18.500 16:32:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:13:18.500 16:32:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:13:18.500 16:32:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
Cannot find device "nvmf_tgt_br"
00:13:18.500 16:32:55 -- nvmf/common.sh@154 -- # true
00:13:18.500 16:32:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
Cannot find device "nvmf_tgt_br2"
00:13:18.500 16:32:55 -- nvmf/common.sh@155 -- # true
00:13:18.500 16:32:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:13:18.500 16:32:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
Cannot find device "nvmf_tgt_br"
00:13:18.500 16:32:55 -- nvmf/common.sh@157 -- # true
00:13:18.500 16:32:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
Cannot find device "nvmf_tgt_br2"
00:13:18.500 16:32:55 -- nvmf/common.sh@158 -- # true
00:13:18.500 16:32:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:13:18.500 16:32:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:13:18.500 16:32:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:13:18.500 16:32:55 -- nvmf/common.sh@161 -- # true
00:13:18.500 16:32:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:13:18.500 16:32:55 -- nvmf/common.sh@162 -- # true
00:13:18.500 16:32:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:13:18.500 16:32:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:13:18.500 16:32:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:13:18.500 16:32:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:13:18.500 16:32:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:13:18.759 16:32:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:13:18.759 16:32:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:13:18.759 16:32:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:13:18.759 16:32:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:13:18.759 16:32:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:13:18.759 16:32:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:13:18.759 16:32:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:13:18.759 16:32:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:13:18.759 16:32:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:13:18.759 16:32:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:13:18.759 16:32:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:13:18.759 16:32:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:13:18.759 16:32:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:13:18.759 16:32:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:13:18.759 16:32:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:13:18.759 16:32:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:13:18.759 16:32:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:13:18.759 16:32:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:13:18.759 16:32:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:13:18.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:18.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms
00:13:18.759
00:13:18.759 --- 10.0.0.2 ping statistics ---
00:13:18.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:18.759 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms
00:13:18.759 16:32:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:13:18.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:13:18.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms
00:13:18.759
00:13:18.759 --- 10.0.0.3 ping statistics ---
00:13:18.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:18.759 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:13:18.759 16:32:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:13:18.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:18.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms
00:13:18.759
00:13:18.759 --- 10.0.0.1 ping statistics ---
00:13:18.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:18.759 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms
00:13:18.759 16:32:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:18.759 16:32:56 -- nvmf/common.sh@421 -- # return 0
00:13:18.759 16:32:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:13:18.759 16:32:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:18.759 16:32:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:13:18.759 16:32:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:13:18.759 16:32:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:18.759 16:32:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:13:18.759 16:32:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:13:18.759 16:32:56 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:18.759 16:32:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:13:18.759 16:32:56 -- common/autotest_common.sh@722 -- # xtrace_disable
00:13:18.759 16:32:56 -- common/autotest_common.sh@10 -- # set +x
00:13:18.759 16:32:56 -- nvmf/common.sh@469 -- # nvmfpid=81914
00:13:18.759 16:32:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:18.759 16:32:56 -- nvmf/common.sh@470 -- # waitforlisten 81914
00:13:18.759 16:32:56 -- common/autotest_common.sh@829 -- # '[' -z 81914 ']'
00:13:18.759 16:32:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:18.759 16:32:56 -- common/autotest_common.sh@834 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:18.759 16:32:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
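After nvmfappstart backgrounds nvmf_tgt inside the namespace, waitforlisten blocks until the target answers on its RPC socket. The trace only shows the entry (rpc_addr=/var/tmp/spdk.sock, max_retries=100) and the successful exit, so the loop below is a sketch of the usual pattern, with the probe method (a cheap rpc.py call) being an assumption rather than the helper's verbatim implementation:

    # Sketch: poll until the target's RPC socket accepts a request.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1    # target died while starting up
            # assumption: probe with any lightweight RPC; the real helper may differ
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                   &> /dev/null; then
                return 0
            fi
            sleep 0.5                     # assumption: short delay between probes
        done
        return 1
    }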
00:13:18.759 16:32:56 -- common/autotest_common.sh@838 -- # xtrace_disable
00:13:18.759 16:32:56 -- common/autotest_common.sh@10 -- # set +x
00:13:18.759 [2024-11-16 16:32:56.212089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:18.759 [2024-11-16 16:32:56.212147] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:19.018 [2024-11-16 16:32:56.345997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:19.018 [2024-11-16 16:32:56.407157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:19.018 [2024-11-16 16:32:56.407280] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:19.018 [2024-11-16 16:32:56.407292] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:19.018 [2024-11-16 16:32:56.407299] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:19.018 [2024-11-16 16:32:56.407832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:19.018 [2024-11-16 16:32:56.407859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:19.018 [2024-11-16 16:32:56.408580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:19.955 16:32:57 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:19.955 16:32:57 -- common/autotest_common.sh@862 -- # return 0
00:13:19.955 16:32:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:13:19.955 16:32:57 -- common/autotest_common.sh@728 -- # xtrace_disable
00:13:19.956 16:32:57 -- common/autotest_common.sh@10 -- # set +x
00:13:19.956 16:32:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:19.956 16:32:57 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:19.956 16:32:57 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.956 16:32:57 -- common/autotest_common.sh@10 -- # set +x
00:13:19.956 [2024-11-16 16:32:57.220950] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:19.956 16:32:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.956 16:32:57 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:19.956 16:32:57 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.956 16:32:57 -- common/autotest_common.sh@10 -- # set +x
00:13:19.956 16:32:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.956 16:32:57 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:19.956 16:32:57 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.956 16:32:57 -- common/autotest_common.sh@10 -- # set +x
00:13:19.956 [2024-11-16 16:32:57.238806] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:19.956 16:32:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.956 16:32:57 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:13:19.956 16:32:57 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.956 16:32:57 -- common/autotest_common.sh@10 -- # set +x
00:13:19.956 NULL1
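Once the target is up, connect_stress.sh provisions it with the four RPCs visible above (rpc_cmd in the trace is a thin wrapper that talks to /var/tmp/spdk.sock). As plain rpc.py invocations against a live target, the same sequence looks like this; the flag meanings in the comments are kept deliberately loose where the trace alone does not confirm them:

    # The provisioning sequence from connect_stress.sh lines 15-18.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -o/-u are tuning flags as passed by the test
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                    # -a: allow any host, -s: serial, -m: max namespaces
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                        # listen on the netns-side veth address
    "$rpc_py" bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512-byte blocks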
00:13:19.956 16:32:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.956 16:32:57 -- target/connect_stress.sh@21 -- # PERF_PID=81965
00:13:19.956 16:32:57 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:13:19.956 16:32:57 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:13:19.956 16:32:57 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:13:19.956 16:32:57 -- target/connect_stress.sh@27 -- # seq 1 20
00:13:19.956 16:32:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:13:19.956 16:32:57 -- target/connect_stress.sh@28 -- # cat
[... the @27/@28 'for i in $(seq 1 20)' / 'cat' pair repeats for all twenty iterations, appending one batched RPC invocation per pass to rpc.txt; those near-identical records are condensed here ...]
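Lines 20-35 of connect_stress.sh, as traced above and below, launch the connect/disconnect stressor in the background, build rpc.txt, and then poll: as long as the stressor still answers kill -0, the harness keeps feeding the batched RPCs to the target so its RPC path stays busy while connections churn. A sketch consistent with those records; the exact text cat'd into rpc.txt is not visible in the trace, so the here-doc body is a labeled placeholder:

    # Sketch of the launch-and-monitor phase implied by the trace.
    connect_stress=/home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress
    rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    "$connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &                        # 10-second connect/disconnect workload
    PERF_PID=$!
    rm -f "$rpcs"
    for i in $(seq 1 20); do           # batch twenty RPC invocations for the monitor loop
        cat <<'EOF' >> "$rpcs"
framework_get_config
EOF
    done                               # placeholder RPC: the real rpc.txt text isn't shown in the trace
    while kill -0 "$PERF_PID"; do      # the final failed check prints 'No such process', as logged below
        rpc_cmd < "$rpcs"              # keep the target's RPC path busy during the churn
    done
    wait "$PERF_PID"                   # propagate the stressor's exit status
    rm -f "$rpcs"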
00:13:19.956 16:32:57 -- target/connect_stress.sh@34 -- # kill -0 81965
00:13:19.956 16:32:57 -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:19.956 16:32:57 -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.956 16:32:57 -- common/autotest_common.sh@10 -- # set +x
00:13:20.215 16:32:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[... this five-record liveness check (kill -0 81965 followed by a batched rpc_cmd) repeats continuously while the stressor runs, timestamps advancing from 00:13:20.215 (16:32:57) through 00:13:30.138 (16:33:07); the intervening near-identical iterations are condensed here ...]
common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.571 16:33:07 -- common/autotest_common.sh@10 -- # set +x 00:13:30.138 16:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.138 16:33:07 -- target/connect_stress.sh@34 -- # kill -0 81965 00:13:30.138 16:33:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.138 16:33:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.138 16:33:07 -- common/autotest_common.sh@10 -- # set +x 00:13:30.138 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:30.398 16:33:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.398 16:33:07 -- target/connect_stress.sh@34 -- # kill -0 81965 00:13:30.398 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81965) - No such process 00:13:30.398 16:33:07 -- target/connect_stress.sh@38 -- # wait 81965 00:13:30.398 16:33:07 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:30.398 16:33:07 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:30.398 16:33:07 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:30.398 16:33:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:30.398 16:33:07 -- nvmf/common.sh@116 -- # sync 00:13:30.398 16:33:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:30.398 16:33:07 -- nvmf/common.sh@119 -- # set +e 00:13:30.398 16:33:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:30.398 16:33:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:30.398 rmmod nvme_tcp 00:13:30.398 rmmod nvme_fabrics 00:13:30.398 rmmod nvme_keyring 00:13:30.398 16:33:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:30.398 16:33:07 -- nvmf/common.sh@123 -- # set -e 00:13:30.398 16:33:07 -- nvmf/common.sh@124 -- # return 0 00:13:30.398 16:33:07 -- nvmf/common.sh@477 -- # '[' -n 81914 ']' 00:13:30.398 16:33:07 -- nvmf/common.sh@478 -- # killprocess 81914 00:13:30.398 16:33:07 -- common/autotest_common.sh@936 -- # '[' -z 81914 ']' 00:13:30.398 16:33:07 -- common/autotest_common.sh@940 -- # kill -0 81914 00:13:30.398 16:33:07 -- common/autotest_common.sh@941 -- # uname 00:13:30.398 16:33:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:30.398 16:33:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81914 00:13:30.398 killing process with pid 81914 00:13:30.398 16:33:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:30.398 16:33:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:30.398 16:33:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81914' 00:13:30.398 16:33:07 -- common/autotest_common.sh@955 -- # kill 81914 00:13:30.399 16:33:07 -- common/autotest_common.sh@960 -- # wait 81914 00:13:30.657 16:33:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:30.657 16:33:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:30.657 16:33:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:30.657 16:33:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.657 16:33:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:30.657 16:33:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.657 16:33:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.657 16:33:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.657 16:33:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:30.657 ************************************ 
00:13:30.657 END TEST nvmf_connect_stress 00:13:30.657 ************************************ 00:13:30.657 00:13:30.657 real 0m12.388s 00:13:30.657 user 0m41.621s 00:13:30.657 sys 0m2.909s 00:13:30.657 16:33:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:30.657 16:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:30.657 16:33:08 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.657 16:33:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:30.657 16:33:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.657 16:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:30.657 ************************************ 00:13:30.657 START TEST nvmf_fused_ordering 00:13:30.657 ************************************ 00:13:30.657 16:33:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.916 * Looking for test storage... 00:13:30.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:30.916 16:33:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:30.916 16:33:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:30.916 16:33:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:30.916 16:33:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:30.916 16:33:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:30.916 16:33:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:30.916 16:33:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:30.916 16:33:08 -- scripts/common.sh@335 -- # IFS=.-: 00:13:30.916 16:33:08 -- scripts/common.sh@335 -- # read -ra ver1 00:13:30.916 16:33:08 -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.916 16:33:08 -- scripts/common.sh@336 -- # read -ra ver2 00:13:30.916 16:33:08 -- scripts/common.sh@337 -- # local 'op=<' 00:13:30.916 16:33:08 -- scripts/common.sh@339 -- # ver1_l=2 00:13:30.916 16:33:08 -- scripts/common.sh@340 -- # ver2_l=1 00:13:30.916 16:33:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:30.916 16:33:08 -- scripts/common.sh@343 -- # case "$op" in 00:13:30.916 16:33:08 -- scripts/common.sh@344 -- # : 1 00:13:30.916 16:33:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:30.916 16:33:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:30.916 16:33:08 -- scripts/common.sh@364 -- # decimal 1 00:13:30.916 16:33:08 -- scripts/common.sh@352 -- # local d=1 00:13:30.916 16:33:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.916 16:33:08 -- scripts/common.sh@354 -- # echo 1 00:13:30.916 16:33:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:30.916 16:33:08 -- scripts/common.sh@365 -- # decimal 2 00:13:30.916 16:33:08 -- scripts/common.sh@352 -- # local d=2 00:13:30.916 16:33:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.916 16:33:08 -- scripts/common.sh@354 -- # echo 2 00:13:30.916 16:33:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:30.916 16:33:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:30.916 16:33:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:30.916 16:33:08 -- scripts/common.sh@367 -- # return 0 00:13:30.916 16:33:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.916 16:33:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:30.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.916 --rc genhtml_branch_coverage=1 00:13:30.916 --rc genhtml_function_coverage=1 00:13:30.916 --rc genhtml_legend=1 00:13:30.916 --rc geninfo_all_blocks=1 00:13:30.916 --rc geninfo_unexecuted_blocks=1 00:13:30.916 00:13:30.916 ' 00:13:30.916 16:33:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:30.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.916 --rc genhtml_branch_coverage=1 00:13:30.916 --rc genhtml_function_coverage=1 00:13:30.916 --rc genhtml_legend=1 00:13:30.916 --rc geninfo_all_blocks=1 00:13:30.916 --rc geninfo_unexecuted_blocks=1 00:13:30.916 00:13:30.916 ' 00:13:30.916 16:33:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:30.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.916 --rc genhtml_branch_coverage=1 00:13:30.916 --rc genhtml_function_coverage=1 00:13:30.916 --rc genhtml_legend=1 00:13:30.916 --rc geninfo_all_blocks=1 00:13:30.916 --rc geninfo_unexecuted_blocks=1 00:13:30.916 00:13:30.916 ' 00:13:30.916 16:33:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:30.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.916 --rc genhtml_branch_coverage=1 00:13:30.916 --rc genhtml_function_coverage=1 00:13:30.916 --rc genhtml_legend=1 00:13:30.916 --rc geninfo_all_blocks=1 00:13:30.916 --rc geninfo_unexecuted_blocks=1 00:13:30.916 00:13:30.916 ' 00:13:30.917 16:33:08 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:30.917 16:33:08 -- nvmf/common.sh@7 -- # uname -s 00:13:30.917 16:33:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.917 16:33:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.917 16:33:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.917 16:33:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.917 16:33:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.917 16:33:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.917 16:33:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.917 16:33:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.917 16:33:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.917 16:33:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.917 16:33:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
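[Annotation] The NVME_HOSTNQN value captured above comes from nvme-cli's nvme gen-hostnqn, which emits a UUID-based NQN under the 2014-08 org.nvmexpress namespace. A minimal sketch of building an equivalent value by hand, assuming util-linux's uuidgen is available (the UUID in the log is simply the one this run drew):

    uuid=$(uuidgen)                                    # any RFC 4122 UUID works
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"  # same shape nvme gen-hostnqn emits
    echo "$hostnqn"

As the next trace line shows, common.sh then reuses the bare UUID as NVME_HOSTID, so both host identifiers stay consistent for the rest of the suite.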
00:13:30.917 16:33:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:13:30.917 16:33:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.917 16:33:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.917 16:33:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:30.917 16:33:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:30.917 16:33:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.917 16:33:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.917 16:33:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.917 16:33:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.917 16:33:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.917 16:33:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.917 16:33:08 -- paths/export.sh@5 -- # export PATH 00:13:30.917 16:33:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.917 16:33:08 -- nvmf/common.sh@46 -- # : 0 00:13:30.917 16:33:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:30.917 16:33:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:30.917 16:33:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:30.917 16:33:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.917 16:33:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.917 16:33:08 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:30.917 16:33:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:30.917 16:33:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:30.917 16:33:08 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:30.917 16:33:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:30.917 16:33:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.917 16:33:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:30.917 16:33:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:30.917 16:33:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:30.917 16:33:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.917 16:33:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.917 16:33:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.917 16:33:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:30.917 16:33:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:30.917 16:33:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:30.917 16:33:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:30.917 16:33:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:30.917 16:33:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:30.917 16:33:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.917 16:33:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.917 16:33:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:30.917 16:33:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:30.917 16:33:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:30.917 16:33:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:30.917 16:33:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:30.917 16:33:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.917 16:33:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:30.917 16:33:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:30.917 16:33:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:30.917 16:33:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:30.917 16:33:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:30.917 16:33:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:30.917 Cannot find device "nvmf_tgt_br" 00:13:30.917 16:33:08 -- nvmf/common.sh@154 -- # true 00:13:30.917 16:33:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:30.917 Cannot find device "nvmf_tgt_br2" 00:13:30.917 16:33:08 -- nvmf/common.sh@155 -- # true 00:13:30.917 16:33:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:30.917 16:33:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:30.917 Cannot find device "nvmf_tgt_br" 00:13:30.917 16:33:08 -- nvmf/common.sh@157 -- # true 00:13:30.917 16:33:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:30.917 Cannot find device "nvmf_tgt_br2" 00:13:30.917 16:33:08 -- nvmf/common.sh@158 -- # true 00:13:30.917 16:33:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:30.917 16:33:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:31.176 16:33:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:31.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.176 16:33:08 -- nvmf/common.sh@161 -- # true 00:13:31.176 16:33:08 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:31.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.176 16:33:08 -- nvmf/common.sh@162 -- # true 00:13:31.176 16:33:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:31.176 16:33:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:31.176 16:33:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:31.176 16:33:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:31.176 16:33:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:31.176 16:33:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:31.176 16:33:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:31.176 16:33:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:31.176 16:33:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:31.176 16:33:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:31.176 16:33:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:31.176 16:33:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:31.176 16:33:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:31.176 16:33:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:31.176 16:33:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:31.176 16:33:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:31.176 16:33:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:31.176 16:33:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:31.176 16:33:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:31.176 16:33:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:31.176 16:33:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:31.176 16:33:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:31.176 16:33:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:31.176 16:33:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:31.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:13:31.176 00:13:31.176 --- 10.0.0.2 ping statistics --- 00:13:31.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.176 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:13:31.176 16:33:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:31.176 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:31.176 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:13:31.176 00:13:31.176 --- 10.0.0.3 ping statistics --- 00:13:31.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.176 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:31.176 16:33:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:31.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:31.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:31.176 00:13:31.176 --- 10.0.0.1 ping statistics --- 00:13:31.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.176 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:31.176 16:33:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.176 16:33:08 -- nvmf/common.sh@421 -- # return 0 00:13:31.176 16:33:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:31.176 16:33:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.176 16:33:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:31.176 16:33:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:31.176 16:33:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.176 16:33:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:31.176 16:33:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:31.176 16:33:08 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:31.176 16:33:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:31.176 16:33:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.176 16:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:31.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.176 16:33:08 -- nvmf/common.sh@469 -- # nvmfpid=82303 00:13:31.176 16:33:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:31.176 16:33:08 -- nvmf/common.sh@470 -- # waitforlisten 82303 00:13:31.176 16:33:08 -- common/autotest_common.sh@829 -- # '[' -z 82303 ']' 00:13:31.176 16:33:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.176 16:33:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.176 16:33:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.176 16:33:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.176 16:33:08 -- common/autotest_common.sh@10 -- # set +x 00:13:31.435 [2024-11-16 16:33:08.701199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:31.435 [2024-11-16 16:33:08.701426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.435 [2024-11-16 16:33:08.842671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.435 [2024-11-16 16:33:08.911637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:31.435 [2024-11-16 16:33:08.912118] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.435 [2024-11-16 16:33:08.912306] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.435 [2024-11-16 16:33:08.912497] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
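[Annotation] nvmfappstart above boots the target binary inside the nvmf_tgt_ns_spdk namespace and then blocks until the RPC socket answers before any rpc_cmd is issued. A hedged sketch of that launch-and-wait pattern; the real helper is waitforlisten in autotest_common.sh, and the retry count here is purely illustrative:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app answers (or give up).
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done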
00:13:31.435 [2024-11-16 16:33:08.912702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.371 16:33:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.371 16:33:09 -- common/autotest_common.sh@862 -- # return 0 00:13:32.371 16:33:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:32.371 16:33:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.371 16:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:32.371 16:33:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.371 16:33:09 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.371 16:33:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.371 16:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:32.371 [2024-11-16 16:33:09.758574] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.371 16:33:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.371 16:33:09 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:32.371 16:33:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.371 16:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:32.371 16:33:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.371 16:33:09 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.371 16:33:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.371 16:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:32.371 [2024-11-16 16:33:09.774714] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.371 16:33:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.371 16:33:09 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:32.371 16:33:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.371 16:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:32.371 NULL1 00:13:32.371 16:33:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.371 16:33:09 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:32.371 16:33:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.371 16:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:32.371 16:33:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.371 16:33:09 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:32.371 16:33:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.371 16:33:09 -- common/autotest_common.sh@10 -- # set +x 00:13:32.371 16:33:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.371 16:33:09 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:32.371 [2024-11-16 16:33:09.820182] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
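[Annotation] The rpc_cmd sequence above builds the target side of this test: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, up to 10 namespaces), add a listener on 10.0.0.2:4420, and attach a 1000 MiB null bdev as namespace 1. The same steps expressed as direct scripts/rpc.py calls, a sketch whose arguments simply mirror the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MiB backing namespace, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

This is the namespace the fused_ordering tool reports below as "Namespace ID: 1 size: 1GB".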
00:13:32.371 [2024-11-16 16:33:09.820223] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82353 ] 00:13:32.938 Attached to nqn.2016-06.io.spdk:cnode1 00:13:32.938 Namespace ID: 1 size: 1GB 00:13:32.938 fused_ordering(0) 00:13:32.938 fused_ordering(1) 00:13:32.938 fused_ordering(2) 00:13:32.938 fused_ordering(3) 00:13:32.938 fused_ordering(4) 00:13:32.938 fused_ordering(5) 00:13:32.938 fused_ordering(6) 00:13:32.938 fused_ordering(7) 00:13:32.938 fused_ordering(8) 00:13:32.938 fused_ordering(9) 00:13:32.938 fused_ordering(10) 00:13:32.938 fused_ordering(11) 00:13:32.938 fused_ordering(12) 00:13:32.938 fused_ordering(13) 00:13:32.938 fused_ordering(14) 00:13:32.938 fused_ordering(15) 00:13:32.938 fused_ordering(16) 00:13:32.938 fused_ordering(17) 00:13:32.938 fused_ordering(18) 00:13:32.938 fused_ordering(19) 00:13:32.938 fused_ordering(20) 00:13:32.938 fused_ordering(21) 00:13:32.938 fused_ordering(22) 00:13:32.938 fused_ordering(23) 00:13:32.938 fused_ordering(24) 00:13:32.938 fused_ordering(25) 00:13:32.938 fused_ordering(26) 00:13:32.938 fused_ordering(27) 00:13:32.938 fused_ordering(28) 00:13:32.938 fused_ordering(29) 00:13:32.938 fused_ordering(30) 00:13:32.938 fused_ordering(31) 00:13:32.938 fused_ordering(32) 00:13:32.938 fused_ordering(33) 00:13:32.938 fused_ordering(34) 00:13:32.938 fused_ordering(35) 00:13:32.938 fused_ordering(36) 00:13:32.938 fused_ordering(37) 00:13:32.938 fused_ordering(38) 00:13:32.938 fused_ordering(39) 00:13:32.938 fused_ordering(40) 00:13:32.938 fused_ordering(41) 00:13:32.938 fused_ordering(42) 00:13:32.938 fused_ordering(43) 00:13:32.938 fused_ordering(44) 00:13:32.938 fused_ordering(45) 00:13:32.938 fused_ordering(46) 00:13:32.938 fused_ordering(47) 00:13:32.938 fused_ordering(48) 00:13:32.938 fused_ordering(49) 00:13:32.938 fused_ordering(50) 00:13:32.938 fused_ordering(51) 00:13:32.938 fused_ordering(52) 00:13:32.938 fused_ordering(53) 00:13:32.938 fused_ordering(54) 00:13:32.938 fused_ordering(55) 00:13:32.938 fused_ordering(56) 00:13:32.938 fused_ordering(57) 00:13:32.938 fused_ordering(58) 00:13:32.938 fused_ordering(59) 00:13:32.938 fused_ordering(60) 00:13:32.938 fused_ordering(61) 00:13:32.938 fused_ordering(62) 00:13:32.938 fused_ordering(63) 00:13:32.938 fused_ordering(64) 00:13:32.938 fused_ordering(65) 00:13:32.938 fused_ordering(66) 00:13:32.938 fused_ordering(67) 00:13:32.938 fused_ordering(68) 00:13:32.938 fused_ordering(69) 00:13:32.938 fused_ordering(70) 00:13:32.938 fused_ordering(71) 00:13:32.938 fused_ordering(72) 00:13:32.938 fused_ordering(73) 00:13:32.938 fused_ordering(74) 00:13:32.938 fused_ordering(75) 00:13:32.938 fused_ordering(76) 00:13:32.938 fused_ordering(77) 00:13:32.938 fused_ordering(78) 00:13:32.938 fused_ordering(79) 00:13:32.938 fused_ordering(80) 00:13:32.938 fused_ordering(81) 00:13:32.938 fused_ordering(82) 00:13:32.938 fused_ordering(83) 00:13:32.938 fused_ordering(84) 00:13:32.938 fused_ordering(85) 00:13:32.938 fused_ordering(86) 00:13:32.938 fused_ordering(87) 00:13:32.939 fused_ordering(88) 00:13:32.939 fused_ordering(89) 00:13:32.939 fused_ordering(90) 00:13:32.939 fused_ordering(91) 00:13:32.939 fused_ordering(92) 00:13:32.939 fused_ordering(93) 00:13:32.939 fused_ordering(94) 00:13:32.939 fused_ordering(95) 00:13:32.939 fused_ordering(96) 00:13:32.939 fused_ordering(97) 00:13:32.939 fused_ordering(98) 
00:13:32.939 fused_ordering(99) 00:13:32.939 fused_ordering(100) 00:13:32.939 fused_ordering(101) 00:13:32.939 fused_ordering(102) 00:13:32.939 fused_ordering(103) 00:13:32.939 fused_ordering(104) 00:13:32.939 fused_ordering(105) 00:13:32.939 fused_ordering(106) 00:13:32.939 fused_ordering(107) 00:13:32.939 fused_ordering(108) 00:13:32.939 fused_ordering(109) 00:13:32.939 fused_ordering(110) 00:13:32.939 fused_ordering(111) 00:13:32.939 fused_ordering(112) 00:13:32.939 fused_ordering(113) 00:13:32.939 fused_ordering(114) 00:13:32.939 fused_ordering(115) 00:13:32.939 fused_ordering(116) 00:13:32.939 fused_ordering(117) 00:13:32.939 fused_ordering(118) 00:13:32.939 fused_ordering(119) 00:13:32.939 fused_ordering(120) 00:13:32.939 fused_ordering(121) 00:13:32.939 fused_ordering(122) 00:13:32.939 fused_ordering(123) 00:13:32.939 fused_ordering(124) 00:13:32.939 fused_ordering(125) 00:13:32.939 fused_ordering(126) 00:13:32.939 fused_ordering(127) 00:13:32.939 fused_ordering(128) 00:13:32.939 fused_ordering(129) 00:13:32.939 fused_ordering(130) 00:13:32.939 fused_ordering(131) 00:13:32.939 fused_ordering(132) 00:13:32.939 fused_ordering(133) 00:13:32.939 fused_ordering(134) 00:13:32.939 fused_ordering(135) 00:13:32.939 fused_ordering(136) 00:13:32.939 fused_ordering(137) 00:13:32.939 fused_ordering(138) 00:13:32.939 fused_ordering(139) 00:13:32.939 fused_ordering(140) 00:13:32.939 fused_ordering(141) 00:13:32.939 fused_ordering(142) 00:13:32.939 fused_ordering(143) 00:13:32.939 fused_ordering(144) 00:13:32.939 fused_ordering(145) 00:13:32.939 fused_ordering(146) 00:13:32.939 fused_ordering(147) 00:13:32.939 fused_ordering(148) 00:13:32.939 fused_ordering(149) 00:13:32.939 fused_ordering(150) 00:13:32.939 fused_ordering(151) 00:13:32.939 fused_ordering(152) 00:13:32.939 fused_ordering(153) 00:13:32.939 fused_ordering(154) 00:13:32.939 fused_ordering(155) 00:13:32.939 fused_ordering(156) 00:13:32.939 fused_ordering(157) 00:13:32.939 fused_ordering(158) 00:13:32.939 fused_ordering(159) 00:13:32.939 fused_ordering(160) 00:13:32.939 fused_ordering(161) 00:13:32.939 fused_ordering(162) 00:13:32.939 fused_ordering(163) 00:13:32.939 fused_ordering(164) 00:13:32.939 fused_ordering(165) 00:13:32.939 fused_ordering(166) 00:13:32.939 fused_ordering(167) 00:13:32.939 fused_ordering(168) 00:13:32.939 fused_ordering(169) 00:13:32.939 fused_ordering(170) 00:13:32.939 fused_ordering(171) 00:13:32.939 fused_ordering(172) 00:13:32.939 fused_ordering(173) 00:13:32.939 fused_ordering(174) 00:13:32.939 fused_ordering(175) 00:13:32.939 fused_ordering(176) 00:13:32.939 fused_ordering(177) 00:13:32.939 fused_ordering(178) 00:13:32.939 fused_ordering(179) 00:13:32.939 fused_ordering(180) 00:13:32.939 fused_ordering(181) 00:13:32.939 fused_ordering(182) 00:13:32.939 fused_ordering(183) 00:13:32.939 fused_ordering(184) 00:13:32.939 fused_ordering(185) 00:13:32.939 fused_ordering(186) 00:13:32.939 fused_ordering(187) 00:13:32.939 fused_ordering(188) 00:13:32.939 fused_ordering(189) 00:13:32.939 fused_ordering(190) 00:13:32.939 fused_ordering(191) 00:13:32.939 fused_ordering(192) 00:13:32.939 fused_ordering(193) 00:13:32.939 fused_ordering(194) 00:13:32.939 fused_ordering(195) 00:13:32.939 fused_ordering(196) 00:13:32.939 fused_ordering(197) 00:13:32.939 fused_ordering(198) 00:13:32.939 fused_ordering(199) 00:13:32.939 fused_ordering(200) 00:13:32.939 fused_ordering(201) 00:13:32.939 fused_ordering(202) 00:13:32.939 fused_ordering(203) 00:13:32.939 fused_ordering(204) 00:13:32.939 fused_ordering(205) 00:13:33.198 
fused_ordering(206) 00:13:33.198 fused_ordering(207) 00:13:33.198 fused_ordering(208) 00:13:33.198 fused_ordering(209) 00:13:33.198 fused_ordering(210) 00:13:33.198 fused_ordering(211) 00:13:33.198 fused_ordering(212) 00:13:33.198 fused_ordering(213) 00:13:33.198 fused_ordering(214) 00:13:33.198 fused_ordering(215) 00:13:33.198 fused_ordering(216) 00:13:33.198 fused_ordering(217) 00:13:33.198 fused_ordering(218) 00:13:33.198 fused_ordering(219) 00:13:33.198 fused_ordering(220) 00:13:33.198 fused_ordering(221) 00:13:33.198 fused_ordering(222) 00:13:33.198 fused_ordering(223) 00:13:33.198 fused_ordering(224) 00:13:33.198 fused_ordering(225) 00:13:33.198 fused_ordering(226) 00:13:33.198 fused_ordering(227) 00:13:33.198 fused_ordering(228) 00:13:33.198 fused_ordering(229) 00:13:33.198 fused_ordering(230) 00:13:33.198 fused_ordering(231) 00:13:33.198 fused_ordering(232) 00:13:33.198 fused_ordering(233) 00:13:33.198 fused_ordering(234) 00:13:33.198 fused_ordering(235) 00:13:33.198 fused_ordering(236) 00:13:33.198 fused_ordering(237) 00:13:33.198 fused_ordering(238) 00:13:33.198 fused_ordering(239) 00:13:33.198 fused_ordering(240) 00:13:33.198 fused_ordering(241) 00:13:33.198 fused_ordering(242) 00:13:33.198 fused_ordering(243) 00:13:33.198 fused_ordering(244) 00:13:33.198 fused_ordering(245) 00:13:33.198 fused_ordering(246) 00:13:33.198 fused_ordering(247) 00:13:33.198 fused_ordering(248) 00:13:33.198 fused_ordering(249) 00:13:33.198 fused_ordering(250) 00:13:33.198 fused_ordering(251) 00:13:33.198 fused_ordering(252) 00:13:33.198 fused_ordering(253) 00:13:33.198 fused_ordering(254) 00:13:33.198 fused_ordering(255) 00:13:33.198 fused_ordering(256) 00:13:33.198 fused_ordering(257) 00:13:33.198 fused_ordering(258) 00:13:33.198 fused_ordering(259) 00:13:33.198 fused_ordering(260) 00:13:33.198 fused_ordering(261) 00:13:33.198 fused_ordering(262) 00:13:33.198 fused_ordering(263) 00:13:33.198 fused_ordering(264) 00:13:33.198 fused_ordering(265) 00:13:33.198 fused_ordering(266) 00:13:33.198 fused_ordering(267) 00:13:33.198 fused_ordering(268) 00:13:33.198 fused_ordering(269) 00:13:33.198 fused_ordering(270) 00:13:33.198 fused_ordering(271) 00:13:33.198 fused_ordering(272) 00:13:33.198 fused_ordering(273) 00:13:33.198 fused_ordering(274) 00:13:33.198 fused_ordering(275) 00:13:33.198 fused_ordering(276) 00:13:33.198 fused_ordering(277) 00:13:33.198 fused_ordering(278) 00:13:33.198 fused_ordering(279) 00:13:33.198 fused_ordering(280) 00:13:33.198 fused_ordering(281) 00:13:33.198 fused_ordering(282) 00:13:33.198 fused_ordering(283) 00:13:33.198 fused_ordering(284) 00:13:33.198 fused_ordering(285) 00:13:33.198 fused_ordering(286) 00:13:33.198 fused_ordering(287) 00:13:33.198 fused_ordering(288) 00:13:33.198 fused_ordering(289) 00:13:33.198 fused_ordering(290) 00:13:33.198 fused_ordering(291) 00:13:33.198 fused_ordering(292) 00:13:33.198 fused_ordering(293) 00:13:33.198 fused_ordering(294) 00:13:33.198 fused_ordering(295) 00:13:33.198 fused_ordering(296) 00:13:33.198 fused_ordering(297) 00:13:33.198 fused_ordering(298) 00:13:33.198 fused_ordering(299) 00:13:33.198 fused_ordering(300) 00:13:33.198 fused_ordering(301) 00:13:33.198 fused_ordering(302) 00:13:33.198 fused_ordering(303) 00:13:33.198 fused_ordering(304) 00:13:33.198 fused_ordering(305) 00:13:33.198 fused_ordering(306) 00:13:33.198 fused_ordering(307) 00:13:33.198 fused_ordering(308) 00:13:33.198 fused_ordering(309) 00:13:33.198 fused_ordering(310) 00:13:33.198 fused_ordering(311) 00:13:33.198 fused_ordering(312) 00:13:33.199 fused_ordering(313) 
00:13:33.199 fused_ordering(314) 00:13:33.199 fused_ordering(315) 00:13:33.199 fused_ordering(316) 00:13:33.199 fused_ordering(317) 00:13:33.199 fused_ordering(318) 00:13:33.199 fused_ordering(319) 00:13:33.199 fused_ordering(320) 00:13:33.199 fused_ordering(321) 00:13:33.199 fused_ordering(322) 00:13:33.199 fused_ordering(323) 00:13:33.199 fused_ordering(324) 00:13:33.199 fused_ordering(325) 00:13:33.199 fused_ordering(326) 00:13:33.199 fused_ordering(327) 00:13:33.199 fused_ordering(328) 00:13:33.199 fused_ordering(329) 00:13:33.199 fused_ordering(330) 00:13:33.199 fused_ordering(331) 00:13:33.199 fused_ordering(332) 00:13:33.199 fused_ordering(333) 00:13:33.199 fused_ordering(334) 00:13:33.199 fused_ordering(335) 00:13:33.199 fused_ordering(336) 00:13:33.199 fused_ordering(337) 00:13:33.199 fused_ordering(338) 00:13:33.199 fused_ordering(339) 00:13:33.199 fused_ordering(340) 00:13:33.199 fused_ordering(341) 00:13:33.199 fused_ordering(342) 00:13:33.199 fused_ordering(343) 00:13:33.199 fused_ordering(344) 00:13:33.199 fused_ordering(345) 00:13:33.199 fused_ordering(346) 00:13:33.199 fused_ordering(347) 00:13:33.199 fused_ordering(348) 00:13:33.199 fused_ordering(349) 00:13:33.199 fused_ordering(350) 00:13:33.199 fused_ordering(351) 00:13:33.199 fused_ordering(352) 00:13:33.199 fused_ordering(353) 00:13:33.199 fused_ordering(354) 00:13:33.199 fused_ordering(355) 00:13:33.199 fused_ordering(356) 00:13:33.199 fused_ordering(357) 00:13:33.199 fused_ordering(358) 00:13:33.199 fused_ordering(359) 00:13:33.199 fused_ordering(360) 00:13:33.199 fused_ordering(361) 00:13:33.199 fused_ordering(362) 00:13:33.199 fused_ordering(363) 00:13:33.199 fused_ordering(364) 00:13:33.199 fused_ordering(365) 00:13:33.199 fused_ordering(366) 00:13:33.199 fused_ordering(367) 00:13:33.199 fused_ordering(368) 00:13:33.199 fused_ordering(369) 00:13:33.199 fused_ordering(370) 00:13:33.199 fused_ordering(371) 00:13:33.199 fused_ordering(372) 00:13:33.199 fused_ordering(373) 00:13:33.199 fused_ordering(374) 00:13:33.199 fused_ordering(375) 00:13:33.199 fused_ordering(376) 00:13:33.199 fused_ordering(377) 00:13:33.199 fused_ordering(378) 00:13:33.199 fused_ordering(379) 00:13:33.199 fused_ordering(380) 00:13:33.199 fused_ordering(381) 00:13:33.199 fused_ordering(382) 00:13:33.199 fused_ordering(383) 00:13:33.199 fused_ordering(384) 00:13:33.199 fused_ordering(385) 00:13:33.199 fused_ordering(386) 00:13:33.199 fused_ordering(387) 00:13:33.199 fused_ordering(388) 00:13:33.199 fused_ordering(389) 00:13:33.199 fused_ordering(390) 00:13:33.199 fused_ordering(391) 00:13:33.199 fused_ordering(392) 00:13:33.199 fused_ordering(393) 00:13:33.199 fused_ordering(394) 00:13:33.199 fused_ordering(395) 00:13:33.199 fused_ordering(396) 00:13:33.199 fused_ordering(397) 00:13:33.199 fused_ordering(398) 00:13:33.199 fused_ordering(399) 00:13:33.199 fused_ordering(400) 00:13:33.199 fused_ordering(401) 00:13:33.199 fused_ordering(402) 00:13:33.199 fused_ordering(403) 00:13:33.199 fused_ordering(404) 00:13:33.199 fused_ordering(405) 00:13:33.199 fused_ordering(406) 00:13:33.199 fused_ordering(407) 00:13:33.199 fused_ordering(408) 00:13:33.199 fused_ordering(409) 00:13:33.199 fused_ordering(410) 00:13:33.458 fused_ordering(411) 00:13:33.458 fused_ordering(412) 00:13:33.458 fused_ordering(413) 00:13:33.458 fused_ordering(414) 00:13:33.458 fused_ordering(415) 00:13:33.458 fused_ordering(416) 00:13:33.458 fused_ordering(417) 00:13:33.458 fused_ordering(418) 00:13:33.458 fused_ordering(419) 00:13:33.458 fused_ordering(420) 00:13:33.458 
fused_ordering(421) 00:13:33.458 fused_ordering(422) 00:13:33.458 fused_ordering(423) 00:13:33.458 fused_ordering(424) 00:13:33.458 fused_ordering(425) 00:13:33.458 fused_ordering(426) 00:13:33.458 fused_ordering(427) 00:13:33.458 fused_ordering(428) 00:13:33.458 fused_ordering(429) 00:13:33.458 fused_ordering(430) 00:13:33.458 fused_ordering(431) 00:13:33.458 fused_ordering(432) 00:13:33.458 fused_ordering(433) 00:13:33.458 fused_ordering(434) 00:13:33.458 fused_ordering(435) 00:13:33.458 fused_ordering(436) 00:13:33.458 fused_ordering(437) 00:13:33.458 fused_ordering(438) 00:13:33.458 fused_ordering(439) 00:13:33.458 fused_ordering(440) 00:13:33.458 fused_ordering(441) 00:13:33.458 fused_ordering(442) 00:13:33.458 fused_ordering(443) 00:13:33.458 fused_ordering(444) 00:13:33.458 fused_ordering(445) 00:13:33.458 fused_ordering(446) 00:13:33.458 fused_ordering(447) 00:13:33.458 fused_ordering(448) 00:13:33.458 fused_ordering(449) 00:13:33.458 fused_ordering(450) 00:13:33.458 fused_ordering(451) 00:13:33.458 fused_ordering(452) 00:13:33.458 fused_ordering(453) 00:13:33.458 fused_ordering(454) 00:13:33.458 fused_ordering(455) 00:13:33.458 fused_ordering(456) 00:13:33.458 fused_ordering(457) 00:13:33.458 fused_ordering(458) 00:13:33.458 fused_ordering(459) 00:13:33.458 fused_ordering(460) 00:13:33.458 fused_ordering(461) 00:13:33.458 fused_ordering(462) 00:13:33.458 fused_ordering(463) 00:13:33.458 fused_ordering(464) 00:13:33.458 fused_ordering(465) 00:13:33.458 fused_ordering(466) 00:13:33.458 fused_ordering(467) 00:13:33.458 fused_ordering(468) 00:13:33.458 fused_ordering(469) 00:13:33.458 fused_ordering(470) 00:13:33.458 fused_ordering(471) 00:13:33.458 fused_ordering(472) 00:13:33.458 fused_ordering(473) 00:13:33.458 fused_ordering(474) 00:13:33.458 fused_ordering(475) 00:13:33.458 fused_ordering(476) 00:13:33.458 fused_ordering(477) 00:13:33.458 fused_ordering(478) 00:13:33.458 fused_ordering(479) 00:13:33.458 fused_ordering(480) 00:13:33.458 fused_ordering(481) 00:13:33.458 fused_ordering(482) 00:13:33.458 fused_ordering(483) 00:13:33.458 fused_ordering(484) 00:13:33.458 fused_ordering(485) 00:13:33.458 fused_ordering(486) 00:13:33.459 fused_ordering(487) 00:13:33.459 fused_ordering(488) 00:13:33.459 fused_ordering(489) 00:13:33.459 fused_ordering(490) 00:13:33.459 fused_ordering(491) 00:13:33.459 fused_ordering(492) 00:13:33.459 fused_ordering(493) 00:13:33.459 fused_ordering(494) 00:13:33.459 fused_ordering(495) 00:13:33.459 fused_ordering(496) 00:13:33.459 fused_ordering(497) 00:13:33.459 fused_ordering(498) 00:13:33.459 fused_ordering(499) 00:13:33.459 fused_ordering(500) 00:13:33.459 fused_ordering(501) 00:13:33.459 fused_ordering(502) 00:13:33.459 fused_ordering(503) 00:13:33.459 fused_ordering(504) 00:13:33.459 fused_ordering(505) 00:13:33.459 fused_ordering(506) 00:13:33.459 fused_ordering(507) 00:13:33.459 fused_ordering(508) 00:13:33.459 fused_ordering(509) 00:13:33.459 fused_ordering(510) 00:13:33.459 fused_ordering(511) 00:13:33.459 fused_ordering(512) 00:13:33.459 fused_ordering(513) 00:13:33.459 fused_ordering(514) 00:13:33.459 fused_ordering(515) 00:13:33.459 fused_ordering(516) 00:13:33.459 fused_ordering(517) 00:13:33.459 fused_ordering(518) 00:13:33.459 fused_ordering(519) 00:13:33.459 fused_ordering(520) 00:13:33.459 fused_ordering(521) 00:13:33.459 fused_ordering(522) 00:13:33.459 fused_ordering(523) 00:13:33.459 fused_ordering(524) 00:13:33.459 fused_ordering(525) 00:13:33.459 fused_ordering(526) 00:13:33.459 fused_ordering(527) 00:13:33.459 fused_ordering(528) 
00:13:33.459 fused_ordering(529) 00:13:33.459 fused_ordering(530) 00:13:33.459 fused_ordering(531) 00:13:33.459 fused_ordering(532) 00:13:33.459 fused_ordering(533) 00:13:33.459 fused_ordering(534) 00:13:33.459 fused_ordering(535) 00:13:33.459 fused_ordering(536) 00:13:33.459 fused_ordering(537) 00:13:33.459 fused_ordering(538) 00:13:33.459 fused_ordering(539) 00:13:33.459 fused_ordering(540) 00:13:33.459 fused_ordering(541) 00:13:33.459 fused_ordering(542) 00:13:33.459 fused_ordering(543) 00:13:33.459 fused_ordering(544) 00:13:33.459 fused_ordering(545) 00:13:33.459 fused_ordering(546) 00:13:33.459 fused_ordering(547) 00:13:33.459 fused_ordering(548) 00:13:33.459 fused_ordering(549) 00:13:33.459 fused_ordering(550) 00:13:33.459 fused_ordering(551) 00:13:33.459 fused_ordering(552) 00:13:33.459 fused_ordering(553) 00:13:33.459 fused_ordering(554) 00:13:33.459 fused_ordering(555) 00:13:33.459 fused_ordering(556) 00:13:33.459 fused_ordering(557) 00:13:33.459 fused_ordering(558) 00:13:33.459 fused_ordering(559) 00:13:33.459 fused_ordering(560) 00:13:33.459 fused_ordering(561) 00:13:33.459 fused_ordering(562) 00:13:33.459 fused_ordering(563) 00:13:33.459 fused_ordering(564) 00:13:33.459 fused_ordering(565) 00:13:33.459 fused_ordering(566) 00:13:33.459 fused_ordering(567) 00:13:33.459 fused_ordering(568) 00:13:33.459 fused_ordering(569) 00:13:33.459 fused_ordering(570) 00:13:33.459 fused_ordering(571) 00:13:33.459 fused_ordering(572) 00:13:33.459 fused_ordering(573) 00:13:33.459 fused_ordering(574) 00:13:33.459 fused_ordering(575) 00:13:33.459 fused_ordering(576) 00:13:33.459 fused_ordering(577) 00:13:33.459 fused_ordering(578) 00:13:33.459 fused_ordering(579) 00:13:33.459 fused_ordering(580) 00:13:33.459 fused_ordering(581) 00:13:33.459 fused_ordering(582) 00:13:33.459 fused_ordering(583) 00:13:33.459 fused_ordering(584) 00:13:33.459 fused_ordering(585) 00:13:33.459 fused_ordering(586) 00:13:33.459 fused_ordering(587) 00:13:33.459 fused_ordering(588) 00:13:33.459 fused_ordering(589) 00:13:33.459 fused_ordering(590) 00:13:33.459 fused_ordering(591) 00:13:33.459 fused_ordering(592) 00:13:33.459 fused_ordering(593) 00:13:33.459 fused_ordering(594) 00:13:33.459 fused_ordering(595) 00:13:33.459 fused_ordering(596) 00:13:33.459 fused_ordering(597) 00:13:33.459 fused_ordering(598) 00:13:33.459 fused_ordering(599) 00:13:33.459 fused_ordering(600) 00:13:33.459 fused_ordering(601) 00:13:33.459 fused_ordering(602) 00:13:33.459 fused_ordering(603) 00:13:33.459 fused_ordering(604) 00:13:33.459 fused_ordering(605) 00:13:33.459 fused_ordering(606) 00:13:33.459 fused_ordering(607) 00:13:33.459 fused_ordering(608) 00:13:33.459 fused_ordering(609) 00:13:33.459 fused_ordering(610) 00:13:33.459 fused_ordering(611) 00:13:33.459 fused_ordering(612) 00:13:33.459 fused_ordering(613) 00:13:33.459 fused_ordering(614) 00:13:33.459 fused_ordering(615) 00:13:33.718 fused_ordering(616) 00:13:33.718 fused_ordering(617) 00:13:33.718 fused_ordering(618) 00:13:33.718 fused_ordering(619) 00:13:33.718 fused_ordering(620) 00:13:33.718 fused_ordering(621) 00:13:33.718 fused_ordering(622) 00:13:33.718 fused_ordering(623) 00:13:33.718 fused_ordering(624) 00:13:33.718 fused_ordering(625) 00:13:33.718 fused_ordering(626) 00:13:33.718 fused_ordering(627) 00:13:33.718 fused_ordering(628) 00:13:33.718 fused_ordering(629) 00:13:33.718 fused_ordering(630) 00:13:33.718 fused_ordering(631) 00:13:33.718 fused_ordering(632) 00:13:33.718 fused_ordering(633) 00:13:33.718 fused_ordering(634) 00:13:33.718 fused_ordering(635) 00:13:33.718 
fused_ordering(636) 00:13:33.718 fused_ordering(637) 00:13:33.718 fused_ordering(638) 00:13:33.718 fused_ordering(639) 00:13:33.718 fused_ordering(640) 00:13:33.718 fused_ordering(641) 00:13:33.718 fused_ordering(642) 00:13:33.718 fused_ordering(643) 00:13:33.718 fused_ordering(644) 00:13:33.718 fused_ordering(645) 00:13:33.718 fused_ordering(646) 00:13:33.718 fused_ordering(647) 00:13:33.718 fused_ordering(648) 00:13:33.718 fused_ordering(649) 00:13:33.718 fused_ordering(650) 00:13:33.718 fused_ordering(651) 00:13:33.718 fused_ordering(652) 00:13:33.718 fused_ordering(653) 00:13:33.718 fused_ordering(654) 00:13:33.718 fused_ordering(655) 00:13:33.718 fused_ordering(656) 00:13:33.718 fused_ordering(657) 00:13:33.718 fused_ordering(658) 00:13:33.718 fused_ordering(659) 00:13:33.718 fused_ordering(660) 00:13:33.718 fused_ordering(661) 00:13:33.718 fused_ordering(662) 00:13:33.718 fused_ordering(663) 00:13:33.718 fused_ordering(664) 00:13:33.718 fused_ordering(665) 00:13:33.718 fused_ordering(666) 00:13:33.718 fused_ordering(667) 00:13:33.718 fused_ordering(668) 00:13:33.718 fused_ordering(669) 00:13:33.718 fused_ordering(670) 00:13:33.718 fused_ordering(671) 00:13:33.718 fused_ordering(672) 00:13:33.718 fused_ordering(673) 00:13:33.718 fused_ordering(674) 00:13:33.718 fused_ordering(675) 00:13:33.718 fused_ordering(676) 00:13:33.718 fused_ordering(677) 00:13:33.718 fused_ordering(678) 00:13:33.718 fused_ordering(679) 00:13:33.718 fused_ordering(680) 00:13:33.718 fused_ordering(681) 00:13:33.718 fused_ordering(682) 00:13:33.718 fused_ordering(683) 00:13:33.718 fused_ordering(684) 00:13:33.718 fused_ordering(685) 00:13:33.718 fused_ordering(686) 00:13:33.718 fused_ordering(687) 00:13:33.718 fused_ordering(688) 00:13:33.718 fused_ordering(689) 00:13:33.718 fused_ordering(690) 00:13:33.718 fused_ordering(691) 00:13:33.718 fused_ordering(692) 00:13:33.718 fused_ordering(693) 00:13:33.718 fused_ordering(694) 00:13:33.718 fused_ordering(695) 00:13:33.718 fused_ordering(696) 00:13:33.718 fused_ordering(697) 00:13:33.718 fused_ordering(698) 00:13:33.718 fused_ordering(699) 00:13:33.718 fused_ordering(700) 00:13:33.718 fused_ordering(701) 00:13:33.718 fused_ordering(702) 00:13:33.718 fused_ordering(703) 00:13:33.718 fused_ordering(704) 00:13:33.718 fused_ordering(705) 00:13:33.718 fused_ordering(706) 00:13:33.718 fused_ordering(707) 00:13:33.718 fused_ordering(708) 00:13:33.718 fused_ordering(709) 00:13:33.718 fused_ordering(710) 00:13:33.718 fused_ordering(711) 00:13:33.718 fused_ordering(712) 00:13:33.718 fused_ordering(713) 00:13:33.718 fused_ordering(714) 00:13:33.718 fused_ordering(715) 00:13:33.718 fused_ordering(716) 00:13:33.718 fused_ordering(717) 00:13:33.718 fused_ordering(718) 00:13:33.719 fused_ordering(719) 00:13:33.719 fused_ordering(720) 00:13:33.719 fused_ordering(721) 00:13:33.719 fused_ordering(722) 00:13:33.719 fused_ordering(723) 00:13:33.719 fused_ordering(724) 00:13:33.719 fused_ordering(725) 00:13:33.719 fused_ordering(726) 00:13:33.719 fused_ordering(727) 00:13:33.719 fused_ordering(728) 00:13:33.719 fused_ordering(729) 00:13:33.719 fused_ordering(730) 00:13:33.719 fused_ordering(731) 00:13:33.719 fused_ordering(732) 00:13:33.719 fused_ordering(733) 00:13:33.719 fused_ordering(734) 00:13:33.719 fused_ordering(735) 00:13:33.719 fused_ordering(736) 00:13:33.719 fused_ordering(737) 00:13:33.719 fused_ordering(738) 00:13:33.719 fused_ordering(739) 00:13:33.719 fused_ordering(740) 00:13:33.719 fused_ordering(741) 00:13:33.719 fused_ordering(742) 00:13:33.719 fused_ordering(743) 
00:13:33.719 fused_ordering(744) [... fused_ordering(745) through fused_ordering(1022) completed in unbroken sequence between 00:13:33.719 and 00:13:34.288; the repetitive per-entry lines are elided ...] 00:13:34.288 fused_ordering(1023) 00:13:34.288 16:33:11 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:34.288 16:33:11 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:34.288 16:33:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:34.288 16:33:11 -- nvmf/common.sh@116 -- # sync 00:13:34.288 16:33:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:34.288 16:33:11 -- nvmf/common.sh@119 -- # set +e 00:13:34.288 16:33:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:34.288 16:33:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:34.288 rmmod nvme_tcp 00:13:34.288 rmmod nvme_fabrics 00:13:34.288 rmmod nvme_keyring 00:13:34.288 16:33:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:34.288 16:33:11 -- nvmf/common.sh@123 -- # set -e 00:13:34.288 16:33:11 -- nvmf/common.sh@124 -- # return 0 00:13:34.288 16:33:11 -- nvmf/common.sh@477 -- # '[' -n 82303 ']' 00:13:34.288 16:33:11 -- nvmf/common.sh@478 -- # killprocess 82303 00:13:34.288 16:33:11 -- common/autotest_common.sh@936 -- # '[' -z 82303 ']' 00:13:34.288 16:33:11 -- common/autotest_common.sh@940 -- # kill -0 82303 00:13:34.288 16:33:11 -- common/autotest_common.sh@941 -- # uname 00:13:34.288 16:33:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:34.288 16:33:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82303 00:13:34.288 killing process with pid 82303
00:13:34.288 16:33:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:34.288 16:33:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:34.288 16:33:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82303' 00:13:34.288 16:33:11 -- common/autotest_common.sh@955 -- # kill 82303 00:13:34.288 16:33:11 -- common/autotest_common.sh@960 -- # wait 82303 00:13:34.547 16:33:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:34.547 16:33:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:34.547 16:33:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:34.547 16:33:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.547 16:33:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:34.547 16:33:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.547 16:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.547 16:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.547 16:33:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:34.547 ************************************ 00:13:34.547 END TEST nvmf_fused_ordering 00:13:34.547 ************************************ 00:13:34.547 00:13:34.547 real 0m3.894s 00:13:34.547 user 0m4.394s 00:13:34.547 sys 0m1.410s 00:13:34.547 16:33:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:34.547 16:33:11 -- common/autotest_common.sh@10 -- # set +x 00:13:34.547 16:33:12 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:34.547 16:33:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:34.547 16:33:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:34.547 16:33:12 -- common/autotest_common.sh@10 -- # set +x 00:13:34.547 ************************************ 00:13:34.547 START TEST nvmf_delete_subsystem 00:13:34.547 ************************************ 00:13:34.547 16:33:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:34.806 * Looking for test storage... 
00:13:34.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:34.806 16:33:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:34.806 16:33:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:34.806 16:33:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:34.806 16:33:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:34.806 16:33:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:34.806 16:33:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:34.807 16:33:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:34.807 16:33:12 -- scripts/common.sh@335 -- # IFS=.-: 00:13:34.807 16:33:12 -- scripts/common.sh@335 -- # read -ra ver1 00:13:34.807 16:33:12 -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.807 16:33:12 -- scripts/common.sh@336 -- # read -ra ver2 00:13:34.807 16:33:12 -- scripts/common.sh@337 -- # local 'op=<' 00:13:34.807 16:33:12 -- scripts/common.sh@339 -- # ver1_l=2 00:13:34.807 16:33:12 -- scripts/common.sh@340 -- # ver2_l=1 00:13:34.807 16:33:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:34.807 16:33:12 -- scripts/common.sh@343 -- # case "$op" in 00:13:34.807 16:33:12 -- scripts/common.sh@344 -- # : 1 00:13:34.807 16:33:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:34.807 16:33:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:34.807 16:33:12 -- scripts/common.sh@364 -- # decimal 1 00:13:34.807 16:33:12 -- scripts/common.sh@352 -- # local d=1 00:13:34.807 16:33:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.807 16:33:12 -- scripts/common.sh@354 -- # echo 1 00:13:34.807 16:33:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:34.807 16:33:12 -- scripts/common.sh@365 -- # decimal 2 00:13:34.807 16:33:12 -- scripts/common.sh@352 -- # local d=2 00:13:34.807 16:33:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.807 16:33:12 -- scripts/common.sh@354 -- # echo 2 00:13:34.807 16:33:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:34.807 16:33:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:34.807 16:33:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:34.807 16:33:12 -- scripts/common.sh@367 -- # return 0 00:13:34.807 16:33:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.807 16:33:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:34.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.807 --rc genhtml_branch_coverage=1 00:13:34.807 --rc genhtml_function_coverage=1 00:13:34.807 --rc genhtml_legend=1 00:13:34.807 --rc geninfo_all_blocks=1 00:13:34.807 --rc geninfo_unexecuted_blocks=1 00:13:34.807 00:13:34.807 ' 00:13:34.807 16:33:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:34.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.807 --rc genhtml_branch_coverage=1 00:13:34.807 --rc genhtml_function_coverage=1 00:13:34.807 --rc genhtml_legend=1 00:13:34.807 --rc geninfo_all_blocks=1 00:13:34.807 --rc geninfo_unexecuted_blocks=1 00:13:34.807 00:13:34.807 ' 00:13:34.807 16:33:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:34.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.807 --rc genhtml_branch_coverage=1 00:13:34.807 --rc genhtml_function_coverage=1 00:13:34.807 --rc genhtml_legend=1 00:13:34.807 --rc geninfo_all_blocks=1 00:13:34.807 --rc geninfo_unexecuted_blocks=1 00:13:34.807 00:13:34.807 ' 00:13:34.807 
16:33:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:34.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.807 --rc genhtml_branch_coverage=1 00:13:34.807 --rc genhtml_function_coverage=1 00:13:34.807 --rc genhtml_legend=1 00:13:34.807 --rc geninfo_all_blocks=1 00:13:34.807 --rc geninfo_unexecuted_blocks=1 00:13:34.807 00:13:34.807 ' 00:13:34.807 16:33:12 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:34.807 16:33:12 -- nvmf/common.sh@7 -- # uname -s 00:13:34.807 16:33:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.807 16:33:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.807 16:33:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.807 16:33:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.807 16:33:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.807 16:33:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.807 16:33:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.807 16:33:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.807 16:33:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.807 16:33:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.807 16:33:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:13:34.807 16:33:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:13:34.807 16:33:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.807 16:33:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.807 16:33:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:34.807 16:33:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:34.807 16:33:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.807 16:33:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.807 16:33:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.807 16:33:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.807 16:33:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.807 16:33:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.807 16:33:12 -- paths/export.sh@5 -- # export PATH 00:13:34.807 16:33:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.807 16:33:12 -- nvmf/common.sh@46 -- # : 0 00:13:34.807 16:33:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:34.807 16:33:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:34.807 16:33:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:34.807 16:33:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.807 16:33:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.807 16:33:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:34.807 16:33:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:34.807 16:33:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:34.807 16:33:12 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:34.807 16:33:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:34.807 16:33:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.807 16:33:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:34.807 16:33:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:34.807 16:33:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:34.807 16:33:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.807 16:33:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.807 16:33:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.807 16:33:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:34.807 16:33:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:34.807 16:33:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:34.807 16:33:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:34.807 16:33:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:34.807 16:33:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:34.807 16:33:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.807 16:33:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.807 16:33:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:34.807 16:33:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:34.807 16:33:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:34.807 16:33:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:34.807 16:33:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:34.807 16:33:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:34.807 16:33:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:34.807 16:33:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:34.807 16:33:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:34.807 16:33:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:34.807 16:33:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:34.807 16:33:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:34.807 Cannot find device "nvmf_tgt_br" 00:13:34.807 16:33:12 -- nvmf/common.sh@154 -- # true 00:13:34.807 16:33:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.807 Cannot find device "nvmf_tgt_br2" 00:13:34.807 16:33:12 -- nvmf/common.sh@155 -- # true 00:13:34.807 16:33:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:34.807 16:33:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:35.066 Cannot find device "nvmf_tgt_br" 00:13:35.066 16:33:12 -- nvmf/common.sh@157 -- # true 00:13:35.066 16:33:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:35.066 Cannot find device "nvmf_tgt_br2" 00:13:35.066 16:33:12 -- nvmf/common.sh@158 -- # true 00:13:35.066 16:33:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:35.066 16:33:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:35.066 16:33:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.066 16:33:12 -- nvmf/common.sh@161 -- # true 00:13:35.066 16:33:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.066 16:33:12 -- nvmf/common.sh@162 -- # true 00:13:35.066 16:33:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.066 16:33:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.066 16:33:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.066 16:33:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:35.066 16:33:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:35.066 16:33:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:35.066 16:33:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:35.066 16:33:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:35.066 16:33:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:35.066 16:33:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:35.066 16:33:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:35.066 16:33:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:35.066 16:33:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:35.066 16:33:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:35.066 16:33:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:35.066 16:33:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:35.066 16:33:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:35.066 16:33:12 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:35.066 16:33:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:35.066 16:33:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:35.066 16:33:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:35.066 16:33:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:35.325 16:33:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:35.325 16:33:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:35.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:13:35.325 00:13:35.325 --- 10.0.0.2 ping statistics --- 00:13:35.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.325 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:35.325 16:33:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:35.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:13:35.325 00:13:35.325 --- 10.0.0.3 ping statistics --- 00:13:35.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.325 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:35.325 16:33:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:35.325 00:13:35.325 --- 10.0.0.1 ping statistics --- 00:13:35.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.325 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:35.325 16:33:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.325 16:33:12 -- nvmf/common.sh@421 -- # return 0 00:13:35.325 16:33:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:35.325 16:33:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.325 16:33:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:35.325 16:33:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:35.325 16:33:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.325 16:33:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:35.325 16:33:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:35.325 16:33:12 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:35.325 16:33:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:35.325 16:33:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:35.325 16:33:12 -- common/autotest_common.sh@10 -- # set +x 00:13:35.325 16:33:12 -- nvmf/common.sh@469 -- # nvmfpid=82570 00:13:35.325 16:33:12 -- nvmf/common.sh@470 -- # waitforlisten 82570 00:13:35.325 16:33:12 -- common/autotest_common.sh@829 -- # '[' -z 82570 ']' 00:13:35.326 16:33:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:35.326 16:33:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.326 16:33:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.326 16:33:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
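For orientation, the nvmf_veth_init sequence traced above reduces to a short iproute2 recipe: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace, and the initiator reaches it through veth pairs joined by the nvmf_br bridge. The following is a simplified sketch of what nvmf/common.sh does (the second target interface, the iptables rules, and the pre-cleanup steps are omitted), not the exact helper:

    ip netns add nvmf_tgt_ns_spdk                              # target gets its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                            # bridge joins the two pairs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                         # same reachability check as the log

The target is then launched inside the namespace, which is exactly what the NVMF_TARGET_NS_CMD prefix visible above expands to: ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3.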
00:13:35.326 16:33:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.326 16:33:12 -- common/autotest_common.sh@10 -- # set +x 00:13:35.326 [2024-11-16 16:33:12.645960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:35.326 [2024-11-16 16:33:12.646044] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.326 [2024-11-16 16:33:12.785637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:35.584 [2024-11-16 16:33:12.855876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.584 [2024-11-16 16:33:12.856022] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.584 [2024-11-16 16:33:12.856035] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.584 [2024-11-16 16:33:12.856043] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.584 [2024-11-16 16:33:12.856160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.584 [2024-11-16 16:33:12.856521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.153 16:33:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.153 16:33:13 -- common/autotest_common.sh@862 -- # return 0 00:13:36.153 16:33:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:36.153 16:33:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:36.153 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.412 16:33:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.412 16:33:13 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.412 16:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.412 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.412 [2024-11-16 16:33:13.681273] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.412 16:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.412 16:33:13 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:36.412 16:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.412 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.412 16:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.412 16:33:13 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.412 16:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.412 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.412 [2024-11-16 16:33:13.697442] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.412 16:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.412 16:33:13 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:36.412 16:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.412 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.412 NULL1 00:13:36.412 16:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.412 16:33:13 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:36.412 16:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.412 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.412 Delay0 00:13:36.412 16:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.412 16:33:13 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.412 16:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.412 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:13:36.412 16:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.412 16:33:13 -- target/delete_subsystem.sh@28 -- # perf_pid=82622 00:13:36.412 16:33:13 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:36.412 16:33:13 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:36.412 [2024-11-16 16:33:13.891941] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:38.316 16:33:15 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:38.316 16:33:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.316 16:33:15 -- common/autotest_common.sh@10 -- # set +x 00:13:38.576 Write completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Write completed with error (sct=0, sc=8) 00:13:38.576 starting I/O failed: -6 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 starting I/O failed: -6 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Write completed with error (sct=0, sc=8) 00:13:38.576 Write completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 starting I/O failed: -6 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 starting I/O failed: -6 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 starting I/O failed: -6 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Write completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 starting I/O failed: -6 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 starting I/O failed: -6 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Read completed with error (sct=0, sc=8) 00:13:38.576 Write completed with error (sct=0, sc=8) 00:13:38.576 starting 
I/O failed: -6 00:13:38.576 [... further batches of queued I/Os aborted with "starting I/O failed: -6"; the per-I/O "Read/Write completed with error (sct=0, sc=8)" lines are elided ...] 00:13:38.576 [2024-11-16 16:33:15.934777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x838870 is same with the state(5) to be set 00:13:38.576 [... a long run of "Read/Write completed with error (sct=0, sc=8)" completions drained on this qpair, elided ...] 00:13:38.576 [... a second series of batches aborted with "starting I/O failed: -6", elided ...] 00:13:38.576 [2024-11-16 16:33:15.936196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9be8000c00 is same with the state(5) to be set 00:13:38.576 [... further error completions drained through 00:13:38.577, elided ...] 00:13:39.575 [2024-11-16 16:33:16.905206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x837070 is same with the state(5) to be set 00:13:39.575 [... error completions drained, elided ...] 00:13:39.575 [2024-11-16 16:33:16.937095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9be800c600 is same with the state(5) to be set 00:13:39.575 [... error completions drained, elided ...] 00:13:39.575 [2024-11-16 16:33:16.937259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9be800bf20 is same with the state(5) to be set 00:13:39.575 [... error completions drained, elided ...] 00:13:39.575 [2024-11-16 16:33:16.937851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x838bc0 is same with the state(5) to be set 00:13:39.575 [... error completions drained through 00:13:39.576, elided ...] 00:13:39.576 [2024-11-16 16:33:16.938575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x839120 is same with the state(5) to be set 00:13:39.576 [2024-11-16 16:33:16.938962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x837070 (9): Bad file descriptor 00:13:39.576 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:39.576 16:33:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.576 16:33:16 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:39.576 16:33:16 -- target/delete_subsystem.sh@35 -- # kill -0 82622 00:13:39.576 16:33:16 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:39.576 Initializing NVMe Controllers 00:13:39.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:39.576 Controller IO queue size 128, less than required. 00:13:39.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:39.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:39.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:39.576 Initialization complete. Launching workers.
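The error burst above is the expected outcome of this test: the subsystem is deleted while spdk_nvme_perf still has up to 128 I/Os queued against the delay bdev, so the target tears down the TCP qpairs and every in-flight request completes with an error (the aborts surface as "starting I/O failed: -6", and the orphaned controller is finally flushed with "Bad file descriptor"). A hedged sketch of the sequence, using SPDK's standard rpc.py client in place of the script's rpc_cmd wrapper; every value below is taken from the trace:

    # Back the subsystem with a null bdev wrapped in a delay bdev; the 1000000
    # arguments are average/p99 read/write latencies in microseconds, so each
    # I/O is pinned in the delay queue for about a second.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Queue 128 I/Os, then yank the subsystem out from under the workload.
    build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # all queued I/Os fail and perf exits non-zero ("errors occurred")

The latency summary that follows is consistent with this: the few-hundred IOPS figures and ~0.9 s average latencies are artifacts of the one-second delay bdev and the early abort, not of the transport.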
00:13:39.576 ======================================================== 00:13:39.576 Latency(us) 00:13:39.576 Device Information : IOPS MiB/s Average min max 00:13:39.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.26 0.09 886388.16 994.32 1017604.15 00:13:39.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.34 0.08 904044.20 292.60 1020470.14 00:13:39.576 ======================================================== 00:13:39.576 Total : 341.60 0.17 894985.88 292.60 1020470.14 00:13:39.576 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@35 -- # kill -0 82622 00:13:40.151 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82622) - No such process 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@45 -- # NOT wait 82622 00:13:40.151 16:33:17 -- common/autotest_common.sh@650 -- # local es=0 00:13:40.151 16:33:17 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82622 00:13:40.151 16:33:17 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:40.151 16:33:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.151 16:33:17 -- common/autotest_common.sh@642 -- # type -t wait 00:13:40.151 16:33:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.151 16:33:17 -- common/autotest_common.sh@653 -- # wait 82622 00:13:40.151 16:33:17 -- common/autotest_common.sh@653 -- # es=1 00:13:40.151 16:33:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.151 16:33:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.151 16:33:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:40.151 16:33:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.151 16:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.151 16:33:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.151 16:33:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.151 16:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.151 [2024-11-16 16:33:17.459454] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.151 16:33:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.151 16:33:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.151 16:33:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.151 16:33:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@54 -- # perf_pid=82668 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@57 -- # kill -0 82668 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:40.151 16:33:17 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:40.151 [2024-11-16 16:33:17.629762] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:40.719 16:33:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:40.719 16:33:17 -- target/delete_subsystem.sh@57 -- # kill -0 82668 00:13:40.719 16:33:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.287 16:33:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:41.287 16:33:18 -- target/delete_subsystem.sh@57 -- # kill -0 82668 00:13:41.287 16:33:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.546 16:33:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:41.546 16:33:18 -- target/delete_subsystem.sh@57 -- # kill -0 82668 00:13:41.546 16:33:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:42.114 16:33:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.114 16:33:19 -- target/delete_subsystem.sh@57 -- # kill -0 82668 00:13:42.114 16:33:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:42.682 16:33:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:42.682 16:33:19 -- target/delete_subsystem.sh@57 -- # kill -0 82668 00:13:42.682 16:33:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.250 16:33:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.250 16:33:20 -- target/delete_subsystem.sh@57 -- # kill -0 82668 00:13:43.250 16:33:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:43.250 Initializing NVMe Controllers 00:13:43.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:43.250 Controller IO queue size 128, less than required. 00:13:43.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:43.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:43.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:43.250 Initialization complete. Launching workers. 
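This second pass recreates the subsystem, starts a 3-second perf run (pid 82668), deletes the subsystem again, and then polls until perf exits; the poll iterations are what produce the repeated kill -0 / sleep 0.5 lines above. A sketch of the waiting idiom, reconstructed from the @56-@60 markers in the trace (the exact script text is not shown in the log), with results summarized in the latency table that follows:

    delay=0
    while kill -0 "$perf_pid"; do    # succeeds as long as the perf process is alive
      if (( delay++ > 20 )); then    # cap the wait at ~10 s (20 iterations x 0.5 s)
        echo "perf did not exit" >&2
        exit 1
      fi
      sleep 0.5
    done

Once kill -0 finally fails, the script confirms the process is gone (the "kill: (82668) - No such process" line below) and proceeds to teardown.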
00:13:43.250 ======================================================== 00:13:43.250 Latency(us) 00:13:43.250 Device Information : IOPS MiB/s Average min max 00:13:43.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004061.03 1000153.87 1017612.62 00:13:43.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007095.51 1000232.36 1041402.03 00:13:43.250 ======================================================== 00:13:43.250 Total : 256.00 0.12 1005578.27 1000153.87 1041402.03 00:13:43.250 00:13:43.509 16:33:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:43.509 16:33:20 -- target/delete_subsystem.sh@57 -- # kill -0 82668 00:13:43.509 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82668) - No such process 00:13:43.509 16:33:20 -- target/delete_subsystem.sh@67 -- # wait 82668 00:13:43.767 16:33:20 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:43.768 16:33:20 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:43.768 16:33:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:43.768 16:33:20 -- nvmf/common.sh@116 -- # sync 00:13:43.768 16:33:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:43.768 16:33:21 -- nvmf/common.sh@119 -- # set +e 00:13:43.768 16:33:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:43.768 16:33:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:43.768 rmmod nvme_tcp 00:13:43.768 rmmod nvme_fabrics 00:13:43.768 rmmod nvme_keyring 00:13:43.768 16:33:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:43.768 16:33:21 -- nvmf/common.sh@123 -- # set -e 00:13:43.768 16:33:21 -- nvmf/common.sh@124 -- # return 0 00:13:43.768 16:33:21 -- nvmf/common.sh@477 -- # '[' -n 82570 ']' 00:13:43.768 16:33:21 -- nvmf/common.sh@478 -- # killprocess 82570 00:13:43.768 16:33:21 -- common/autotest_common.sh@936 -- # '[' -z 82570 ']' 00:13:43.768 16:33:21 -- common/autotest_common.sh@940 -- # kill -0 82570 00:13:43.768 16:33:21 -- common/autotest_common.sh@941 -- # uname 00:13:43.768 16:33:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:43.768 16:33:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82570 00:13:43.768 killing process with pid 82570 00:13:43.768 16:33:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:43.768 16:33:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:43.768 16:33:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82570' 00:13:43.768 16:33:21 -- common/autotest_common.sh@955 -- # kill 82570 00:13:43.768 16:33:21 -- common/autotest_common.sh@960 -- # wait 82570 00:13:44.026 16:33:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:44.026 16:33:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:44.026 16:33:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:44.026 16:33:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.026 16:33:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:44.026 16:33:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.026 16:33:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.026 16:33:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.026 16:33:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:44.026 00:13:44.026 real 0m9.392s 00:13:44.026 user 0m29.312s 00:13:44.026 sys 0m1.169s 00:13:44.026 16:33:21 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:13:44.026 16:33:21 -- common/autotest_common.sh@10 -- # set +x 00:13:44.026 ************************************ 00:13:44.026 END TEST nvmf_delete_subsystem 00:13:44.026 ************************************ 00:13:44.026 16:33:21 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:13:44.026 16:33:21 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:44.026 16:33:21 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:44.026 16:33:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:44.026 16:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.026 16:33:21 -- common/autotest_common.sh@10 -- # set +x 00:13:44.026 ************************************ 00:13:44.026 START TEST nvmf_host_management 00:13:44.026 ************************************ 00:13:44.026 16:33:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:44.286 * Looking for test storage... 00:13:44.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:44.286 16:33:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:44.286 16:33:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:44.286 16:33:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:44.286 16:33:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:44.286 16:33:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:44.286 16:33:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:44.286 16:33:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:44.286 16:33:21 -- scripts/common.sh@335 -- # IFS=.-: 00:13:44.286 16:33:21 -- scripts/common.sh@335 -- # read -ra ver1 00:13:44.286 16:33:21 -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.286 16:33:21 -- scripts/common.sh@336 -- # read -ra ver2 00:13:44.286 16:33:21 -- scripts/common.sh@337 -- # local 'op=<' 00:13:44.286 16:33:21 -- scripts/common.sh@339 -- # ver1_l=2 00:13:44.286 16:33:21 -- scripts/common.sh@340 -- # ver2_l=1 00:13:44.286 16:33:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:44.286 16:33:21 -- scripts/common.sh@343 -- # case "$op" in 00:13:44.286 16:33:21 -- scripts/common.sh@344 -- # : 1 00:13:44.286 16:33:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:44.286 16:33:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.286 16:33:21 -- scripts/common.sh@364 -- # decimal 1 00:13:44.286 16:33:21 -- scripts/common.sh@352 -- # local d=1 00:13:44.286 16:33:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.286 16:33:21 -- scripts/common.sh@354 -- # echo 1 00:13:44.286 16:33:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:44.286 16:33:21 -- scripts/common.sh@365 -- # decimal 2 00:13:44.286 16:33:21 -- scripts/common.sh@352 -- # local d=2 00:13:44.286 16:33:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.286 16:33:21 -- scripts/common.sh@354 -- # echo 2 00:13:44.286 16:33:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:44.286 16:33:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:44.286 16:33:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:44.286 16:33:21 -- scripts/common.sh@367 -- # return 0 00:13:44.286 16:33:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.286 16:33:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:44.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.286 --rc genhtml_branch_coverage=1 00:13:44.286 --rc genhtml_function_coverage=1 00:13:44.286 --rc genhtml_legend=1 00:13:44.286 --rc geninfo_all_blocks=1 00:13:44.286 --rc geninfo_unexecuted_blocks=1 00:13:44.286 00:13:44.286 ' 00:13:44.286 16:33:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:44.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.286 --rc genhtml_branch_coverage=1 00:13:44.286 --rc genhtml_function_coverage=1 00:13:44.286 --rc genhtml_legend=1 00:13:44.286 --rc geninfo_all_blocks=1 00:13:44.286 --rc geninfo_unexecuted_blocks=1 00:13:44.286 00:13:44.286 ' 00:13:44.286 16:33:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:44.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.286 --rc genhtml_branch_coverage=1 00:13:44.286 --rc genhtml_function_coverage=1 00:13:44.286 --rc genhtml_legend=1 00:13:44.286 --rc geninfo_all_blocks=1 00:13:44.286 --rc geninfo_unexecuted_blocks=1 00:13:44.286 00:13:44.286 ' 00:13:44.286 16:33:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:44.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.286 --rc genhtml_branch_coverage=1 00:13:44.286 --rc genhtml_function_coverage=1 00:13:44.286 --rc genhtml_legend=1 00:13:44.286 --rc geninfo_all_blocks=1 00:13:44.286 --rc geninfo_unexecuted_blocks=1 00:13:44.286 00:13:44.286 ' 00:13:44.286 16:33:21 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.286 16:33:21 -- nvmf/common.sh@7 -- # uname -s 00:13:44.286 16:33:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.286 16:33:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.286 16:33:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.287 16:33:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.287 16:33:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.287 16:33:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.287 16:33:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.287 16:33:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.287 16:33:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.287 16:33:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.287 16:33:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
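The lt/cmp_versions trace above implements a field-wise version comparison: split both version strings on separators, walk the longer of the two component lists, and compare numerically, so that lcov 1.15 sorts before 2. A minimal self-contained sketch of the same idea, assuming plain numeric dotted versions (the script also splits on '-' and ':'; the helper name here is illustrative, not the script's own):

    # version_lt A B -> exit 0 when A sorts strictly before B, comparing
    # numeric fields left to right and treating missing fields as 0,
    # as the cmp_versions loop traced above does.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "1.15 predates 2.x"    # matches 'lt 1.15 2' above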
00:13:44.287 16:33:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:13:44.287 16:33:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.287 16:33:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.287 16:33:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.287 16:33:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.287 16:33:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.287 16:33:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.287 16:33:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.287 16:33:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.287 16:33:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.287 16:33:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.287 16:33:21 -- paths/export.sh@5 -- # export PATH 00:13:44.287 16:33:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.287 16:33:21 -- nvmf/common.sh@46 -- # : 0 00:13:44.287 16:33:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:44.287 16:33:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:44.287 16:33:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:44.287 16:33:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.287 16:33:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.287 16:33:21 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:44.287 16:33:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:44.287 16:33:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:44.287 16:33:21 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.287 16:33:21 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.287 16:33:21 -- target/host_management.sh@104 -- # nvmftestinit 00:13:44.287 16:33:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:44.287 16:33:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.287 16:33:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:44.287 16:33:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:44.287 16:33:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:44.287 16:33:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.287 16:33:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.287 16:33:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.287 16:33:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:44.287 16:33:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:44.287 16:33:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:44.287 16:33:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:44.287 16:33:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:44.287 16:33:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:44.287 16:33:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.287 16:33:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.287 16:33:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:44.287 16:33:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:44.287 16:33:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.287 16:33:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.287 16:33:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.287 16:33:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.287 16:33:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.287 16:33:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.287 16:33:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.287 16:33:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.287 16:33:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:44.287 16:33:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:44.287 Cannot find device "nvmf_tgt_br" 00:13:44.287 16:33:21 -- nvmf/common.sh@154 -- # true 00:13:44.287 16:33:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.287 Cannot find device "nvmf_tgt_br2" 00:13:44.287 16:33:21 -- nvmf/common.sh@155 -- # true 00:13:44.287 16:33:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:44.287 16:33:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:44.287 Cannot find device "nvmf_tgt_br" 00:13:44.287 16:33:21 -- nvmf/common.sh@157 -- # true 00:13:44.287 16:33:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:44.287 Cannot find device "nvmf_tgt_br2" 00:13:44.287 16:33:21 -- nvmf/common.sh@158 -- # true 00:13:44.287 16:33:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:44.287 16:33:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:44.546 16:33:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:44.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.546 16:33:21 -- nvmf/common.sh@161 -- # true 00:13:44.546 16:33:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.546 16:33:21 -- nvmf/common.sh@162 -- # true 00:13:44.546 16:33:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.546 16:33:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:44.546 16:33:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.546 16:33:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.546 16:33:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.546 16:33:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.546 16:33:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.546 16:33:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:44.546 16:33:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:44.546 16:33:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:44.546 16:33:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:44.546 16:33:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:44.546 16:33:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:44.546 16:33:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.546 16:33:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.546 16:33:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.546 16:33:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:44.546 16:33:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:44.546 16:33:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.546 16:33:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.546 16:33:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.546 16:33:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.546 16:33:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.546 16:33:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:44.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:13:44.546 00:13:44.546 --- 10.0.0.2 ping statistics --- 00:13:44.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.546 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:13:44.546 16:33:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:44.546 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.546 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:13:44.546 00:13:44.546 --- 10.0.0.3 ping statistics --- 00:13:44.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.546 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:44.546 16:33:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:13:44.546 00:13:44.546 --- 10.0.0.1 ping statistics --- 00:13:44.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.546 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:44.546 16:33:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.546 16:33:21 -- nvmf/common.sh@421 -- # return 0 00:13:44.546 16:33:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:44.546 16:33:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.546 16:33:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:44.546 16:33:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:44.546 16:33:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.547 16:33:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:44.547 16:33:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:44.547 16:33:21 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:44.547 16:33:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:44.547 16:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.547 16:33:21 -- common/autotest_common.sh@10 -- # set +x 00:13:44.547 ************************************ 00:13:44.547 START TEST nvmf_host_management 00:13:44.547 ************************************ 00:13:44.547 16:33:22 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:13:44.547 16:33:22 -- target/host_management.sh@69 -- # starttarget 00:13:44.547 16:33:22 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:44.547 16:33:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:44.547 16:33:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.547 16:33:22 -- common/autotest_common.sh@10 -- # set +x 00:13:44.547 16:33:22 -- nvmf/common.sh@469 -- # nvmfpid=82913 00:13:44.547 16:33:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:44.547 16:33:22 -- nvmf/common.sh@470 -- # waitforlisten 82913 00:13:44.547 16:33:22 -- common/autotest_common.sh@829 -- # '[' -z 82913 ']' 00:13:44.547 16:33:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.547 16:33:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.547 16:33:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.547 16:33:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.547 16:33:22 -- common/autotest_common.sh@10 -- # set +x 00:13:44.806 [2024-11-16 16:33:22.047406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:44.806 [2024-11-16 16:33:22.047499] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.806 [2024-11-16 16:33:22.179429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.806 [2024-11-16 16:33:22.241507] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:44.806 [2024-11-16 16:33:22.241933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:44.806 [2024-11-16 16:33:22.241984] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.806 [2024-11-16 16:33:22.242105] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.806 [2024-11-16 16:33:22.242583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.806 [2024-11-16 16:33:22.242770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.807 [2024-11-16 16:33:22.242831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:44.807 [2024-11-16 16:33:22.242836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.744 16:33:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.744 16:33:23 -- common/autotest_common.sh@862 -- # return 0 00:13:45.744 16:33:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:45.744 16:33:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.744 16:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:45.744 16:33:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.744 16:33:23 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.744 16:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.744 16:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:45.744 [2024-11-16 16:33:23.161188] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.744 16:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.744 16:33:23 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:45.744 16:33:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.744 16:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:45.744 16:33:23 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:45.744 16:33:23 -- target/host_management.sh@23 -- # cat 00:13:45.744 16:33:23 -- target/host_management.sh@30 -- # rpc_cmd 00:13:45.744 16:33:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.744 16:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:46.004 Malloc0 00:13:46.004 [2024-11-16 16:33:23.251522] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.004 16:33:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.004 16:33:23 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:46.004 16:33:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:46.004 16:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:46.004 16:33:23 -- target/host_management.sh@73 -- # perfpid=82985 00:13:46.004 16:33:23 -- target/host_management.sh@74 -- # waitforlisten 82985 /var/tmp/bdevperf.sock 00:13:46.004 16:33:23 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:46.004 16:33:23 -- common/autotest_common.sh@829 -- # '[' -z 82985 ']' 00:13:46.004 16:33:23 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:46.004 16:33:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:46.004 16:33:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.004 16:33:23 -- nvmf/common.sh@520 -- # config=() 00:13:46.004 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:13:46.004 16:33:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:46.004 16:33:23 -- nvmf/common.sh@520 -- # local subsystem config 00:13:46.004 16:33:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.004 16:33:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:46.004 16:33:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:46.004 { 00:13:46.004 "params": { 00:13:46.004 "name": "Nvme$subsystem", 00:13:46.004 "trtype": "$TEST_TRANSPORT", 00:13:46.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.004 "adrfam": "ipv4", 00:13:46.004 "trsvcid": "$NVMF_PORT", 00:13:46.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.004 "hdgst": ${hdgst:-false}, 00:13:46.004 "ddgst": ${ddgst:-false} 00:13:46.004 }, 00:13:46.004 "method": "bdev_nvme_attach_controller" 00:13:46.004 } 00:13:46.004 EOF 00:13:46.004 )") 00:13:46.004 16:33:23 -- common/autotest_common.sh@10 -- # set +x 00:13:46.004 16:33:23 -- nvmf/common.sh@542 -- # cat 00:13:46.004 16:33:23 -- nvmf/common.sh@544 -- # jq . 00:13:46.004 16:33:23 -- nvmf/common.sh@545 -- # IFS=, 00:13:46.004 16:33:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:46.004 "params": { 00:13:46.004 "name": "Nvme0", 00:13:46.004 "trtype": "tcp", 00:13:46.004 "traddr": "10.0.0.2", 00:13:46.004 "adrfam": "ipv4", 00:13:46.004 "trsvcid": "4420", 00:13:46.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:46.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:46.004 "hdgst": false, 00:13:46.004 "ddgst": false 00:13:46.004 }, 00:13:46.004 "method": "bdev_nvme_attach_controller" 00:13:46.004 }' 00:13:46.004 [2024-11-16 16:33:23.358675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:46.004 [2024-11-16 16:33:23.358932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82985 ] 00:13:46.262 [2024-11-16 16:33:23.501102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.262 [2024-11-16 16:33:23.585090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.521 Running I/O for 10 seconds... 
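Note how bdevperf reads its controller config from --json /dev/fd/63: gen_nvmf_target_json emits the JSON on stdout and process substitution turns that into an anonymous pipe, so no config file ever touches disk. A minimal sketch of the pattern, building the document with jq -n rather than the script's heredoc, with the target details taken from this log:

    # Emit a bdev_nvme_attach_controller config document on stdout.
    gen_target_json() {
        jq -n '{
            params: {
                name: "Nvme0",
                trtype: "tcp",
                traddr: "10.0.0.2",
                adrfam: "ipv4",
                trsvcid: "4420",
                subnqn: "nqn.2016-06.io.spdk:cnode0",
                hostnqn: "nqn.2016-06.io.spdk:host0",
                hdgst: false,
                ddgst: false
            },
            method: "bdev_nvme_attach_controller"
        }'
    }

    # The consumer only ever sees an anonymous /dev/fd/NN path:
    cat <(gen_target_json)    # stand-in for: bdevperf ... --json /dev/fd/63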
00:13:47.090 16:33:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.090 16:33:24 -- common/autotest_common.sh@862 -- # return 0 00:13:47.090 16:33:24 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:47.090 16:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.090 16:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:47.090 16:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.090 16:33:24 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.090 16:33:24 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:47.090 16:33:24 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:47.090 16:33:24 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:47.090 16:33:24 -- target/host_management.sh@52 -- # local ret=1 00:13:47.090 16:33:24 -- target/host_management.sh@53 -- # local i 00:13:47.090 16:33:24 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:47.090 16:33:24 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:47.090 16:33:24 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:47.090 16:33:24 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:47.090 16:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.090 16:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:47.090 16:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.090 16:33:24 -- target/host_management.sh@55 -- # read_io_count=2451 00:13:47.090 16:33:24 -- target/host_management.sh@58 -- # '[' 2451 -ge 100 ']' 00:13:47.090 16:33:24 -- target/host_management.sh@59 -- # ret=0 00:13:47.090 16:33:24 -- target/host_management.sh@60 -- # break 00:13:47.090 16:33:24 -- target/host_management.sh@64 -- # return 0 00:13:47.090 16:33:24 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:47.090 16:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.090 16:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:47.090 [2024-11-16 16:33:24.473495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.473820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.473969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474368] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to 
be set 00:13:47.090 [2024-11-16 16:33:24.474394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474403] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474420] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474437] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474486] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474495] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474552] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474683] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.090 [2024-11-16 16:33:24.474714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.091 [2024-11-16 16:33:24.474722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.091 [2024-11-16 16:33:24.474730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0be70 is same with the state(5) to be set 00:13:47.091 [2024-11-16 16:33:24.475444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.475985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.475994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.476004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.476014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.476022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.476031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.476039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.476048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.476078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.476091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.476099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.476108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.476117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:13:47.091 [2024-11-16 16:33:24.476126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.476135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.476144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.476152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.091 [2024-11-16 16:33:24.476161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.091 [2024-11-16 16:33:24.476169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 
[2024-11-16 16:33:24.476297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 
16:33:24.476481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:47.092 [2024-11-16 16:33:24.476627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.476725] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15aedc0 was disconnected and freed. reset controller. 
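Each ABORTED - SQ DELETION completion in the dump above is one queued I/O cancelled when host access was revoked mid-run: the test called nvmf_subsystem_remove_host while bdevperf was still issuing commands, the target dropped the qpair, and the controller reset that follows is the initiator reacting. A minimal sketch of the same revoke/restore cycle over the RPC socket, assuming a running target and the NQNs from this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # path as used in this repo
    SUBNQN=nqn.2016-06.io.spdk:cnode0
    HOSTNQN=nqn.2016-06.io.spdk:host0

    # Revoke access: in-flight commands complete with ABORTED - SQ DELETION
    # and the host's qpair is disconnected, as logged above.
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
    sleep 1
    # Restore access so the initiator's reconnect attempts can succeed.
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"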
00:13:47.092 [2024-11-16 16:33:24.477731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:47.092 task offset: 83328 on job bdev=Nvme0n1 fails 00:13:47.092 00:13:47.092 Latency(us) 00:13:47.092 [2024-11-16T16:33:24.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.092 [2024-11-16T16:33:24.583Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:47.092 [2024-11-16T16:33:24.583Z] Job: Nvme0n1 ended in about 0.68 seconds with error 00:13:47.092 Verification LBA range: start 0x0 length 0x400 00:13:47.092 Nvme0n1 : 0.68 3928.96 245.56 93.83 0.00 15662.14 1638.40 23473.80 00:13:47.092 [2024-11-16T16:33:24.583Z] =================================================================================================================== 00:13:47.092 [2024-11-16T16:33:24.583Z] Total : 3928.96 245.56 93.83 0.00 15662.14 1638.40 23473.80 00:13:47.092 16:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.092 16:33:24 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:47.092 [2024-11-16 16:33:24.479377] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:47.092 [2024-11-16 16:33:24.479402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150aa70 (9): Bad file descriptor 00:13:47.092 16:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.092 16:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:47.092 [2024-11-16 16:33:24.482254] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:47.092 [2024-11-16 16:33:24.482351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:47.092 [2024-11-16 16:33:24.482374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:47.092 [2024-11-16 16:33:24.482399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:47.092 [2024-11-16 16:33:24.482410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:47.092 [2024-11-16 16:33:24.482418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:47.092 [2024-11-16 16:33:24.482425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x150aa70 00:13:47.092 [2024-11-16 16:33:24.482474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150aa70 (9): Bad file descriptor 00:13:47.092 [2024-11-16 16:33:24.482490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:47.092 [2024-11-16 16:33:24.482499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:47.092 [2024-11-16 16:33:24.482509] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:47.092 [2024-11-16 16:33:24.482524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
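The failed reconnect above is the point of this host_management step: with the host temporarily off the subsystem's allowed list, the Fabrics CONNECT completes with sct 1, sc 132, which decodes to SCT 0x1 (command specific) / SC 0x84, Connect Invalid Host, matching the "does not allow host" error printed by the target. A minimal sketch of the deny/allow round trip being exercised, using the NQNs from the log and assuming the host was dropped with nvmf_subsystem_remove_host earlier in the script (the removal itself falls outside this excerpt):

  # Deny the host; initiator reconnects now fail CONNECT with sct 1, sc 132 (0x84).
  rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Re-allow it, as done at target/host_management.sh@85 above; reconnects then succeed.
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0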
00:13:47.092 16:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.092 16:33:24 -- target/host_management.sh@87 -- # sleep 1 00:13:48.030 16:33:25 -- target/host_management.sh@91 -- # kill -9 82985 00:13:48.030 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82985) - No such process 00:13:48.030 16:33:25 -- target/host_management.sh@91 -- # true 00:13:48.030 16:33:25 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:48.030 16:33:25 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:48.030 16:33:25 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:48.030 16:33:25 -- nvmf/common.sh@520 -- # config=() 00:13:48.030 16:33:25 -- nvmf/common.sh@520 -- # local subsystem config 00:13:48.030 16:33:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:48.030 16:33:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:48.030 { 00:13:48.030 "params": { 00:13:48.030 "name": "Nvme$subsystem", 00:13:48.030 "trtype": "$TEST_TRANSPORT", 00:13:48.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:48.030 "adrfam": "ipv4", 00:13:48.030 "trsvcid": "$NVMF_PORT", 00:13:48.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:48.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:48.030 "hdgst": ${hdgst:-false}, 00:13:48.030 "ddgst": ${ddgst:-false} 00:13:48.030 }, 00:13:48.030 "method": "bdev_nvme_attach_controller" 00:13:48.030 } 00:13:48.030 EOF 00:13:48.030 )") 00:13:48.030 16:33:25 -- nvmf/common.sh@542 -- # cat 00:13:48.030 16:33:25 -- nvmf/common.sh@544 -- # jq . 00:13:48.030 16:33:25 -- nvmf/common.sh@545 -- # IFS=, 00:13:48.030 16:33:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:48.030 "params": { 00:13:48.030 "name": "Nvme0", 00:13:48.030 "trtype": "tcp", 00:13:48.030 "traddr": "10.0.0.2", 00:13:48.030 "adrfam": "ipv4", 00:13:48.030 "trsvcid": "4420", 00:13:48.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:48.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:48.030 "hdgst": false, 00:13:48.030 "ddgst": false 00:13:48.030 }, 00:13:48.030 "method": "bdev_nvme_attach_controller" 00:13:48.030 }' 00:13:48.289 [2024-11-16 16:33:25.556274] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:48.289 [2024-11-16 16:33:25.556397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83035 ] 00:13:48.289 [2024-11-16 16:33:25.697070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.289 [2024-11-16 16:33:25.760009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.547 Running I/O for 1 seconds... 
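The gen_nvmf_target_json output printed above is the per-controller fragment that bdevperf reads from /dev/fd/62. Roughly what the finished config looks like once placed in the standard SPDK subsystems envelope (the envelope shape is an assumption here; the inner object is copied from the log), written to a temporary file instead of a pipe:

  cat > /tmp/bdevperf.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false }
  } ] } ] }
  EOF
  # Same workload as the run above: queue depth 64, 64 KiB I/O, verify, 1 second.
  build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1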
00:13:49.924 00:13:49.924 Latency(us) 00:13:49.924 [2024-11-16T16:33:27.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.924 [2024-11-16T16:33:27.415Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:49.924 Verification LBA range: start 0x0 length 0x400 00:13:49.924 Nvme0n1 : 1.01 4010.43 250.65 0.00 0.00 15680.28 1400.09 22282.24 00:13:49.924 [2024-11-16T16:33:27.415Z] =================================================================================================================== 00:13:49.924 [2024-11-16T16:33:27.415Z] Total : 4010.43 250.65 0.00 0.00 15680.28 1400.09 22282.24 00:13:49.924 16:33:27 -- target/host_management.sh@101 -- # stoptarget 00:13:49.924 16:33:27 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:49.924 16:33:27 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:49.924 16:33:27 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:49.924 16:33:27 -- target/host_management.sh@40 -- # nvmftestfini 00:13:49.924 16:33:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:49.924 16:33:27 -- nvmf/common.sh@116 -- # sync 00:13:49.924 16:33:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:49.924 16:33:27 -- nvmf/common.sh@119 -- # set +e 00:13:49.924 16:33:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:49.924 16:33:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:49.924 rmmod nvme_tcp 00:13:49.924 rmmod nvme_fabrics 00:13:49.924 rmmod nvme_keyring 00:13:49.924 16:33:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:49.924 16:33:27 -- nvmf/common.sh@123 -- # set -e 00:13:49.924 16:33:27 -- nvmf/common.sh@124 -- # return 0 00:13:49.924 16:33:27 -- nvmf/common.sh@477 -- # '[' -n 82913 ']' 00:13:49.924 16:33:27 -- nvmf/common.sh@478 -- # killprocess 82913 00:13:49.924 16:33:27 -- common/autotest_common.sh@936 -- # '[' -z 82913 ']' 00:13:49.924 16:33:27 -- common/autotest_common.sh@940 -- # kill -0 82913 00:13:49.924 16:33:27 -- common/autotest_common.sh@941 -- # uname 00:13:49.924 16:33:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:49.924 16:33:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82913 00:13:49.924 killing process with pid 82913 00:13:49.924 16:33:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:49.924 16:33:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:49.924 16:33:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82913' 00:13:49.924 16:33:27 -- common/autotest_common.sh@955 -- # kill 82913 00:13:49.924 16:33:27 -- common/autotest_common.sh@960 -- # wait 82913 00:13:50.182 [2024-11-16 16:33:27.557803] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:50.182 16:33:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:50.182 16:33:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:50.182 16:33:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:50.182 16:33:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.182 16:33:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:50.182 16:33:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.182 16:33:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.182 16:33:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.182 16:33:27 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:50.182 00:13:50.182 real 0m5.618s 00:13:50.182 user 0m23.921s 00:13:50.182 sys 0m1.404s 00:13:50.182 16:33:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:50.182 ************************************ 00:13:50.182 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.182 END TEST nvmf_host_management 00:13:50.182 ************************************ 00:13:50.182 16:33:27 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:50.182 00:13:50.182 real 0m6.193s 00:13:50.182 user 0m24.071s 00:13:50.182 sys 0m1.685s 00:13:50.441 16:33:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:50.441 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.441 ************************************ 00:13:50.441 END TEST nvmf_host_management 00:13:50.441 ************************************ 00:13:50.441 16:33:27 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.441 16:33:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:50.441 16:33:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:50.441 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:13:50.441 ************************************ 00:13:50.441 START TEST nvmf_lvol 00:13:50.442 ************************************ 00:13:50.442 16:33:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.442 * Looking for test storage... 00:13:50.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.442 16:33:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:50.442 16:33:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:50.442 16:33:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:50.442 16:33:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:50.442 16:33:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:50.442 16:33:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:50.442 16:33:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:50.442 16:33:27 -- scripts/common.sh@335 -- # IFS=.-: 00:13:50.442 16:33:27 -- scripts/common.sh@335 -- # read -ra ver1 00:13:50.442 16:33:27 -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.442 16:33:27 -- scripts/common.sh@336 -- # read -ra ver2 00:13:50.442 16:33:27 -- scripts/common.sh@337 -- # local 'op=<' 00:13:50.442 16:33:27 -- scripts/common.sh@339 -- # ver1_l=2 00:13:50.442 16:33:27 -- scripts/common.sh@340 -- # ver2_l=1 00:13:50.442 16:33:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:50.442 16:33:27 -- scripts/common.sh@343 -- # case "$op" in 00:13:50.442 16:33:27 -- scripts/common.sh@344 -- # : 1 00:13:50.442 16:33:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:50.442 16:33:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.442 16:33:27 -- scripts/common.sh@364 -- # decimal 1 00:13:50.442 16:33:27 -- scripts/common.sh@352 -- # local d=1 00:13:50.442 16:33:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.442 16:33:27 -- scripts/common.sh@354 -- # echo 1 00:13:50.442 16:33:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:50.442 16:33:27 -- scripts/common.sh@365 -- # decimal 2 00:13:50.442 16:33:27 -- scripts/common.sh@352 -- # local d=2 00:13:50.442 16:33:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.442 16:33:27 -- scripts/common.sh@354 -- # echo 2 00:13:50.442 16:33:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:50.442 16:33:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:50.442 16:33:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:50.442 16:33:27 -- scripts/common.sh@367 -- # return 0 00:13:50.442 16:33:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.442 16:33:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.442 --rc genhtml_branch_coverage=1 00:13:50.442 --rc genhtml_function_coverage=1 00:13:50.442 --rc genhtml_legend=1 00:13:50.442 --rc geninfo_all_blocks=1 00:13:50.442 --rc geninfo_unexecuted_blocks=1 00:13:50.442 00:13:50.442 ' 00:13:50.442 16:33:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.442 --rc genhtml_branch_coverage=1 00:13:50.442 --rc genhtml_function_coverage=1 00:13:50.442 --rc genhtml_legend=1 00:13:50.442 --rc geninfo_all_blocks=1 00:13:50.442 --rc geninfo_unexecuted_blocks=1 00:13:50.442 00:13:50.442 ' 00:13:50.442 16:33:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.442 --rc genhtml_branch_coverage=1 00:13:50.442 --rc genhtml_function_coverage=1 00:13:50.442 --rc genhtml_legend=1 00:13:50.442 --rc geninfo_all_blocks=1 00:13:50.442 --rc geninfo_unexecuted_blocks=1 00:13:50.442 00:13:50.442 ' 00:13:50.442 16:33:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:50.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.442 --rc genhtml_branch_coverage=1 00:13:50.442 --rc genhtml_function_coverage=1 00:13:50.442 --rc genhtml_legend=1 00:13:50.442 --rc geninfo_all_blocks=1 00:13:50.442 --rc geninfo_unexecuted_blocks=1 00:13:50.442 00:13:50.442 ' 00:13:50.442 16:33:27 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.442 16:33:27 -- nvmf/common.sh@7 -- # uname -s 00:13:50.442 16:33:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.442 16:33:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.442 16:33:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.442 16:33:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.442 16:33:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.442 16:33:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.442 16:33:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.442 16:33:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.442 16:33:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.442 16:33:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.442 16:33:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:13:50.442 
16:33:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:13:50.442 16:33:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.442 16:33:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.442 16:33:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.442 16:33:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.442 16:33:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.442 16:33:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.442 16:33:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.442 16:33:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.442 16:33:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.442 16:33:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.442 16:33:27 -- paths/export.sh@5 -- # export PATH 00:13:50.442 16:33:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.442 16:33:27 -- nvmf/common.sh@46 -- # : 0 00:13:50.442 16:33:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:50.442 16:33:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:50.442 16:33:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:50.701 16:33:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.701 16:33:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.701 16:33:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:13:50.701 16:33:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:50.701 16:33:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:50.701 16:33:27 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.701 16:33:27 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.701 16:33:27 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:50.701 16:33:27 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:50.701 16:33:27 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:50.701 16:33:27 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:50.701 16:33:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:50.701 16:33:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.701 16:33:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:50.701 16:33:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:50.701 16:33:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:50.701 16:33:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.701 16:33:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.701 16:33:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.701 16:33:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:50.701 16:33:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:50.701 16:33:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:50.701 16:33:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:50.701 16:33:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:50.701 16:33:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:50.701 16:33:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.701 16:33:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.701 16:33:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:50.701 16:33:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:50.701 16:33:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.701 16:33:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.701 16:33:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.701 16:33:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.702 16:33:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.702 16:33:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.702 16:33:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.702 16:33:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.702 16:33:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:50.702 16:33:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:50.702 Cannot find device "nvmf_tgt_br" 00:13:50.702 16:33:27 -- nvmf/common.sh@154 -- # true 00:13:50.702 16:33:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.702 Cannot find device "nvmf_tgt_br2" 00:13:50.702 16:33:27 -- nvmf/common.sh@155 -- # true 00:13:50.702 16:33:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:50.702 16:33:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:50.702 Cannot find device "nvmf_tgt_br" 00:13:50.702 16:33:27 -- nvmf/common.sh@157 -- # true 00:13:50.702 16:33:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:50.702 Cannot find device "nvmf_tgt_br2" 00:13:50.702 16:33:28 -- nvmf/common.sh@158 -- # true 00:13:50.702 16:33:28 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:13:50.702 16:33:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:50.702 16:33:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.702 16:33:28 -- nvmf/common.sh@161 -- # true 00:13:50.702 16:33:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.702 16:33:28 -- nvmf/common.sh@162 -- # true 00:13:50.702 16:33:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.702 16:33:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.702 16:33:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.702 16:33:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:50.702 16:33:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:50.702 16:33:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:50.702 16:33:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:50.702 16:33:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:50.702 16:33:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:50.702 16:33:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:50.702 16:33:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:50.702 16:33:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:50.702 16:33:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:50.702 16:33:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:50.702 16:33:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:50.702 16:33:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:50.702 16:33:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:50.960 16:33:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:50.960 16:33:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:50.960 16:33:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:50.960 16:33:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:50.960 16:33:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:50.960 16:33:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:50.960 16:33:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:50.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:50.960 00:13:50.960 --- 10.0.0.2 ping statistics --- 00:13:50.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.960 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:50.960 16:33:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:50.960 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:50.960 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:13:50.960 00:13:50.961 --- 10.0.0.3 ping statistics --- 00:13:50.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.961 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:50.961 16:33:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:50.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:50.961 00:13:50.961 --- 10.0.0.1 ping statistics --- 00:13:50.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.961 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:50.961 16:33:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.961 16:33:28 -- nvmf/common.sh@421 -- # return 0 00:13:50.961 16:33:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:50.961 16:33:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.961 16:33:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:50.961 16:33:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:50.961 16:33:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.961 16:33:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:50.961 16:33:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:50.961 16:33:28 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:50.961 16:33:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:50.961 16:33:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:50.961 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 16:33:28 -- nvmf/common.sh@469 -- # nvmfpid=83277 00:13:50.961 16:33:28 -- nvmf/common.sh@470 -- # waitforlisten 83277 00:13:50.961 16:33:28 -- common/autotest_common.sh@829 -- # '[' -z 83277 ']' 00:13:50.961 16:33:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.961 16:33:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:50.961 16:33:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.961 16:33:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.961 16:33:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.961 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 [2024-11-16 16:33:28.349124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:50.961 [2024-11-16 16:33:28.349211] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.220 [2024-11-16 16:33:28.494810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.220 [2024-11-16 16:33:28.580258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.220 [2024-11-16 16:33:28.580467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.220 [2024-11-16 16:33:28.580485] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:51.220 [2024-11-16 16:33:28.580498] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.220 [2024-11-16 16:33:28.580655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.220 [2024-11-16 16:33:28.581573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.220 [2024-11-16 16:33:28.581659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.155 16:33:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.155 16:33:29 -- common/autotest_common.sh@862 -- # return 0 00:13:52.155 16:33:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:52.155 16:33:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.155 16:33:29 -- common/autotest_common.sh@10 -- # set +x 00:13:52.155 16:33:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.155 16:33:29 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.413 [2024-11-16 16:33:29.696338] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.413 16:33:29 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.671 16:33:30 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:52.671 16:33:30 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:52.930 16:33:30 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:52.930 16:33:30 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:53.188 16:33:30 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:53.447 16:33:30 -- target/nvmf_lvol.sh@29 -- # lvs=ff492d37-5098-4308-bdf0-205219e2c548 00:13:53.447 16:33:30 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ff492d37-5098-4308-bdf0-205219e2c548 lvol 20 00:13:53.706 16:33:31 -- target/nvmf_lvol.sh@32 -- # lvol=b5bb27cc-3328-45a1-976f-2c325dc1d9c6 00:13:53.706 16:33:31 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:53.966 16:33:31 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b5bb27cc-3328-45a1-976f-2c325dc1d9c6 00:13:53.966 16:33:31 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:54.225 [2024-11-16 16:33:31.633655] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.225 16:33:31 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:54.483 16:33:31 -- target/nvmf_lvol.sh@42 -- # perf_pid=83425 00:13:54.483 16:33:31 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:54.483 16:33:31 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:55.421 16:33:32 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b5bb27cc-3328-45a1-976f-2c325dc1d9c6 MY_SNAPSHOT 
00:13:55.989 16:33:33 -- target/nvmf_lvol.sh@47 -- # snapshot=081ff75a-f29e-4d3b-9627-fcb1a4c334b7 00:13:55.989 16:33:33 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b5bb27cc-3328-45a1-976f-2c325dc1d9c6 30 00:13:56.248 16:33:33 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 081ff75a-f29e-4d3b-9627-fcb1a4c334b7 MY_CLONE 00:13:56.507 16:33:33 -- target/nvmf_lvol.sh@49 -- # clone=2db13c5d-04eb-4249-a9e8-f22e1e122674 00:13:56.507 16:33:33 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 2db13c5d-04eb-4249-a9e8-f22e1e122674 00:13:57.444 16:33:34 -- target/nvmf_lvol.sh@53 -- # wait 83425 00:14:05.563 Initializing NVMe Controllers 00:14:05.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:05.563 Controller IO queue size 128, less than required. 00:14:05.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:05.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:05.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:05.563 Initialization complete. Launching workers. 00:14:05.563 ======================================================== 00:14:05.563 Latency(us) 00:14:05.563 Device Information : IOPS MiB/s Average min max 00:14:05.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7648.30 29.88 16753.99 2100.47 58041.01 00:14:05.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7842.30 30.63 16336.88 570.69 146158.43 00:14:05.563 ======================================================== 00:14:05.563 Total : 15490.60 60.51 16542.82 570.69 146158.43 00:14:05.563 00:14:05.563 16:33:42 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:05.563 16:33:42 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b5bb27cc-3328-45a1-976f-2c325dc1d9c6 00:14:05.563 16:33:42 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff492d37-5098-4308-bdf0-205219e2c548 00:14:05.563 16:33:42 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:05.563 16:33:42 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:05.563 16:33:42 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:05.563 16:33:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.563 16:33:42 -- nvmf/common.sh@116 -- # sync 00:14:05.563 16:33:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:05.563 16:33:42 -- nvmf/common.sh@119 -- # set +e 00:14:05.563 16:33:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.563 16:33:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:05.563 rmmod nvme_tcp 00:14:05.563 rmmod nvme_fabrics 00:14:05.563 rmmod nvme_keyring 00:14:05.563 16:33:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:05.563 16:33:42 -- nvmf/common.sh@123 -- # set -e 00:14:05.563 16:33:42 -- nvmf/common.sh@124 -- # return 0 00:14:05.563 16:33:42 -- nvmf/common.sh@477 -- # '[' -n 83277 ']' 00:14:05.563 16:33:42 -- nvmf/common.sh@478 -- # killprocess 83277 00:14:05.563 16:33:42 -- common/autotest_common.sh@936 -- # '[' -z 83277 ']' 00:14:05.563 16:33:42 -- common/autotest_common.sh@940 -- # kill -0 83277 00:14:05.563 16:33:42 -- common/autotest_common.sh@941 -- # uname 00:14:05.563 
16:33:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:05.563 16:33:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83277 00:14:05.563 killing process with pid 83277 00:14:05.563 16:33:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:05.563 16:33:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:05.563 16:33:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83277' 00:14:05.563 16:33:42 -- common/autotest_common.sh@955 -- # kill 83277 00:14:05.563 16:33:42 -- common/autotest_common.sh@960 -- # wait 83277 00:14:05.822 16:33:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:05.822 16:33:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:05.822 16:33:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:05.822 16:33:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.081 16:33:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:06.081 16:33:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.081 16:33:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.081 16:33:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.081 16:33:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:06.081 00:14:06.081 real 0m15.633s 00:14:06.081 user 1m5.093s 00:14:06.081 sys 0m3.764s 00:14:06.081 16:33:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:06.081 16:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:06.081 ************************************ 00:14:06.081 END TEST nvmf_lvol 00:14:06.081 ************************************ 00:14:06.081 16:33:43 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:06.081 16:33:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:06.081 16:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.081 16:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:06.081 ************************************ 00:14:06.081 START TEST nvmf_lvs_grow 00:14:06.081 ************************************ 00:14:06.081 16:33:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:06.081 * Looking for test storage... 
00:14:06.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:06.081 16:33:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:06.081 16:33:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:06.081 16:33:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:06.341 16:33:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:06.341 16:33:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:06.341 16:33:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:06.341 16:33:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:06.341 16:33:43 -- scripts/common.sh@335 -- # IFS=.-: 00:14:06.341 16:33:43 -- scripts/common.sh@335 -- # read -ra ver1 00:14:06.341 16:33:43 -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.341 16:33:43 -- scripts/common.sh@336 -- # read -ra ver2 00:14:06.341 16:33:43 -- scripts/common.sh@337 -- # local 'op=<' 00:14:06.341 16:33:43 -- scripts/common.sh@339 -- # ver1_l=2 00:14:06.341 16:33:43 -- scripts/common.sh@340 -- # ver2_l=1 00:14:06.341 16:33:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:06.341 16:33:43 -- scripts/common.sh@343 -- # case "$op" in 00:14:06.341 16:33:43 -- scripts/common.sh@344 -- # : 1 00:14:06.341 16:33:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:06.341 16:33:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.341 16:33:43 -- scripts/common.sh@364 -- # decimal 1 00:14:06.341 16:33:43 -- scripts/common.sh@352 -- # local d=1 00:14:06.341 16:33:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.341 16:33:43 -- scripts/common.sh@354 -- # echo 1 00:14:06.341 16:33:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:06.341 16:33:43 -- scripts/common.sh@365 -- # decimal 2 00:14:06.341 16:33:43 -- scripts/common.sh@352 -- # local d=2 00:14:06.341 16:33:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.341 16:33:43 -- scripts/common.sh@354 -- # echo 2 00:14:06.341 16:33:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:06.341 16:33:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:06.341 16:33:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:06.341 16:33:43 -- scripts/common.sh@367 -- # return 0 00:14:06.341 16:33:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.341 16:33:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:06.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.341 --rc genhtml_branch_coverage=1 00:14:06.341 --rc genhtml_function_coverage=1 00:14:06.341 --rc genhtml_legend=1 00:14:06.341 --rc geninfo_all_blocks=1 00:14:06.341 --rc geninfo_unexecuted_blocks=1 00:14:06.341 00:14:06.341 ' 00:14:06.341 16:33:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:06.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.341 --rc genhtml_branch_coverage=1 00:14:06.341 --rc genhtml_function_coverage=1 00:14:06.341 --rc genhtml_legend=1 00:14:06.341 --rc geninfo_all_blocks=1 00:14:06.341 --rc geninfo_unexecuted_blocks=1 00:14:06.341 00:14:06.341 ' 00:14:06.341 16:33:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:06.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.341 --rc genhtml_branch_coverage=1 00:14:06.341 --rc genhtml_function_coverage=1 00:14:06.341 --rc genhtml_legend=1 00:14:06.341 --rc geninfo_all_blocks=1 00:14:06.341 --rc geninfo_unexecuted_blocks=1 00:14:06.341 00:14:06.341 ' 00:14:06.341 
16:33:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:06.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.341 --rc genhtml_branch_coverage=1 00:14:06.341 --rc genhtml_function_coverage=1 00:14:06.341 --rc genhtml_legend=1 00:14:06.341 --rc geninfo_all_blocks=1 00:14:06.341 --rc geninfo_unexecuted_blocks=1 00:14:06.341 00:14:06.341 ' 00:14:06.341 16:33:43 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:06.341 16:33:43 -- nvmf/common.sh@7 -- # uname -s 00:14:06.341 16:33:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.341 16:33:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.341 16:33:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.341 16:33:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.341 16:33:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.341 16:33:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.341 16:33:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.341 16:33:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.341 16:33:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.341 16:33:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.341 16:33:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:14:06.341 16:33:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:14:06.341 16:33:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.341 16:33:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.341 16:33:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:06.341 16:33:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:06.341 16:33:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.341 16:33:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.341 16:33:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.342 16:33:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.342 16:33:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.342 16:33:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.342 16:33:43 -- paths/export.sh@5 -- # export PATH 00:14:06.342 16:33:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.342 16:33:43 -- nvmf/common.sh@46 -- # : 0 00:14:06.342 16:33:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:06.342 16:33:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:06.342 16:33:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:06.342 16:33:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.342 16:33:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.342 16:33:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:06.342 16:33:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:06.342 16:33:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:06.342 16:33:43 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.342 16:33:43 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:06.342 16:33:43 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:06.342 16:33:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:06.342 16:33:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.342 16:33:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:06.342 16:33:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:06.342 16:33:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:06.342 16:33:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.342 16:33:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.342 16:33:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.342 16:33:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:06.342 16:33:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:06.342 16:33:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:06.342 16:33:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:06.342 16:33:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:06.342 16:33:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:06.342 16:33:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.342 16:33:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.342 16:33:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:06.342 16:33:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:06.342 16:33:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:06.342 16:33:43 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:06.342 16:33:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:06.342 16:33:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.342 16:33:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:06.342 16:33:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:06.342 16:33:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:06.342 16:33:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:06.342 16:33:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:06.342 16:33:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:06.342 Cannot find device "nvmf_tgt_br" 00:14:06.342 16:33:43 -- nvmf/common.sh@154 -- # true 00:14:06.342 16:33:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.342 Cannot find device "nvmf_tgt_br2" 00:14:06.342 16:33:43 -- nvmf/common.sh@155 -- # true 00:14:06.342 16:33:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:06.342 16:33:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:06.342 Cannot find device "nvmf_tgt_br" 00:14:06.342 16:33:43 -- nvmf/common.sh@157 -- # true 00:14:06.342 16:33:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:06.342 Cannot find device "nvmf_tgt_br2" 00:14:06.342 16:33:43 -- nvmf/common.sh@158 -- # true 00:14:06.342 16:33:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:06.342 16:33:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:06.342 16:33:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:06.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.342 16:33:43 -- nvmf/common.sh@161 -- # true 00:14:06.342 16:33:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.342 16:33:43 -- nvmf/common.sh@162 -- # true 00:14:06.342 16:33:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:06.342 16:33:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:06.342 16:33:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:06.342 16:33:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:06.342 16:33:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.342 16:33:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.342 16:33:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.342 16:33:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:06.601 16:33:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:06.601 16:33:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:06.601 16:33:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:06.601 16:33:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:06.601 16:33:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:06.601 16:33:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:06.601 16:33:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:06.601 16:33:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:06.601 16:33:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:06.601 16:33:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:06.601 16:33:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:06.601 16:33:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:06.601 16:33:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:06.601 16:33:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.601 16:33:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.601 16:33:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:06.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:14:06.601 00:14:06.601 --- 10.0.0.2 ping statistics --- 00:14:06.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.601 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:06.601 16:33:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:06.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:14:06.601 00:14:06.601 --- 10.0.0.3 ping statistics --- 00:14:06.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.601 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:06.601 16:33:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:06.601 00:14:06.601 --- 10.0.0.1 ping statistics --- 00:14:06.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.601 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:06.601 16:33:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.601 16:33:43 -- nvmf/common.sh@421 -- # return 0 00:14:06.601 16:33:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:06.601 16:33:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.601 16:33:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:06.601 16:33:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:06.601 16:33:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.601 16:33:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:06.601 16:33:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:06.601 16:33:43 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:06.601 16:33:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:06.601 16:33:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.601 16:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:06.601 16:33:43 -- nvmf/common.sh@469 -- # nvmfpid=83798 00:14:06.601 16:33:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:06.601 16:33:43 -- nvmf/common.sh@470 -- # waitforlisten 83798 00:14:06.601 16:33:43 -- common/autotest_common.sh@829 -- # '[' -z 83798 ']' 00:14:06.601 16:33:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.601 16:33:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
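Stripped of xtrace noise, the veth topology that nvmf_veth_init builds above (and that the three pings just verified) is small. A condensed sketch using only commands taken from the log, with the second target interface (nvmf_tgt_if2 / 10.0.0.3), the "ip link set ... up" steps, and the iptables ACCEPT rules elided for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # bridge joins the two halves
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # nvmf_tgt then runs inside the namespace, so 10.0.0.1 -> 10.0.0.2:4420 is plain TCP end to end.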
00:14:06.601 16:33:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.601 16:33:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.601 16:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:06.601 [2024-11-16 16:33:44.007289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:06.601 [2024-11-16 16:33:44.007391] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.860 [2024-11-16 16:33:44.151850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.860 [2024-11-16 16:33:44.241681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:06.860 [2024-11-16 16:33:44.241887] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.860 [2024-11-16 16:33:44.241910] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.860 [2024-11-16 16:33:44.241940] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.860 [2024-11-16 16:33:44.241981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.796 16:33:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.796 16:33:45 -- common/autotest_common.sh@862 -- # return 0 00:14:07.796 16:33:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:07.796 16:33:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:07.796 16:33:45 -- common/autotest_common.sh@10 -- # set +x 00:14:07.796 16:33:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.796 16:33:45 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:08.055 [2024-11-16 16:33:45.360804] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:08.055 16:33:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:08.055 16:33:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:08.055 16:33:45 -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 ************************************ 00:14:08.055 START TEST lvs_grow_clean 00:14:08.055 ************************************ 00:14:08.055 16:33:45 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:08.055 16:33:45 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:08.314 16:33:45 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:08.314 16:33:45 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:08.572 16:33:45 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:08.572 16:33:45 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:08.572 16:33:45 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:08.830 16:33:46 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:08.830 16:33:46 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:08.830 16:33:46 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e lvol 150 00:14:09.088 16:33:46 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d 00:14:09.088 16:33:46 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:09.088 16:33:46 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:09.348 [2024-11-16 16:33:46.705913] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:09.348 [2024-11-16 16:33:46.705991] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:09.348 true 00:14:09.348 16:33:46 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:09.348 16:33:46 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:09.621 16:33:46 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:09.621 16:33:46 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:09.893 16:33:47 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d 00:14:10.151 16:33:47 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:10.410 [2024-11-16 16:33:47.698401] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.410 16:33:47 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
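Everything the clean-grow test needs is now provisioned: a 200 MiB file exposed as a 4 KiB-block AIO bdev (51200 blocks), an lvstore with 4 MiB clusters on top of it (49 usable data clusters out of 50, the remainder going to metadata, hence the data_clusters == 49 check), a 150 MiB lvol, and the lvol exported as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. A condensed sketch of the same RPC sequence, assuming the target is already running and the tcp transport has been created:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    "$rpc" bdev_aio_create "$aio" aio_bdev 4096                          # 4 KiB block size
    lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)   # prints lvstore uuid
    lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 150)                   # 150 MiB logical volume
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The grow path seen above works bottom-up: truncate -s 400M plus bdev_aio_rescan doubles the backing bdev to 102400 blocks, and a later bdev_lvol_grow_lvstore picks the new space up, which is why the total_data_clusters check moves from 49 to 99.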
00:14:10.669 16:33:47 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83954 00:14:10.669 16:33:47 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:10.669 16:33:47 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:10.669 16:33:47 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83954 /var/tmp/bdevperf.sock 00:14:10.669 16:33:47 -- common/autotest_common.sh@829 -- # '[' -z 83954 ']' 00:14:10.669 16:33:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.669 16:33:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.669 16:33:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.669 16:33:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.669 16:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:10.669 [2024-11-16 16:33:48.024385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:10.669 [2024-11-16 16:33:48.024486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83954 ] 00:14:10.928 [2024-11-16 16:33:48.168364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.928 [2024-11-16 16:33:48.230768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.496 16:33:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.496 16:33:48 -- common/autotest_common.sh@862 -- # return 0 00:14:11.496 16:33:48 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:11.755 Nvme0n1 00:14:11.755 16:33:49 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:12.013 [ 00:14:12.013 { 00:14:12.013 "aliases": [ 00:14:12.013 "6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d" 00:14:12.013 ], 00:14:12.013 "assigned_rate_limits": { 00:14:12.013 "r_mbytes_per_sec": 0, 00:14:12.013 "rw_ios_per_sec": 0, 00:14:12.013 "rw_mbytes_per_sec": 0, 00:14:12.013 "w_mbytes_per_sec": 0 00:14:12.013 }, 00:14:12.013 "block_size": 4096, 00:14:12.013 "claimed": false, 00:14:12.013 "driver_specific": { 00:14:12.013 "mp_policy": "active_passive", 00:14:12.013 "nvme": [ 00:14:12.013 { 00:14:12.013 "ctrlr_data": { 00:14:12.013 "ana_reporting": false, 00:14:12.013 "cntlid": 1, 00:14:12.013 "firmware_revision": "24.01.1", 00:14:12.013 "model_number": "SPDK bdev Controller", 00:14:12.013 "multi_ctrlr": true, 00:14:12.013 "oacs": { 00:14:12.013 "firmware": 0, 00:14:12.013 "format": 0, 00:14:12.013 "ns_manage": 0, 00:14:12.013 "security": 0 00:14:12.013 }, 00:14:12.013 "serial_number": "SPDK0", 00:14:12.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:12.013 "vendor_id": "0x8086" 00:14:12.013 }, 00:14:12.013 "ns_data": { 00:14:12.013 "can_share": true, 00:14:12.013 "id": 1 00:14:12.013 }, 00:14:12.013 "trid": { 00:14:12.013 "adrfam": "IPv4", 00:14:12.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:12.013 "traddr": "10.0.0.2", 00:14:12.013 "trsvcid": "4420", 00:14:12.013 "trtype": "TCP" 00:14:12.013 }, 
00:14:12.013 "vs": { 00:14:12.013 "nvme_version": "1.3" 00:14:12.013 } 00:14:12.013 } 00:14:12.013 ] 00:14:12.013 }, 00:14:12.013 "name": "Nvme0n1", 00:14:12.013 "num_blocks": 38912, 00:14:12.013 "product_name": "NVMe disk", 00:14:12.013 "supported_io_types": { 00:14:12.013 "abort": true, 00:14:12.013 "compare": true, 00:14:12.013 "compare_and_write": true, 00:14:12.013 "flush": true, 00:14:12.013 "nvme_admin": true, 00:14:12.013 "nvme_io": true, 00:14:12.013 "read": true, 00:14:12.013 "reset": true, 00:14:12.013 "unmap": true, 00:14:12.013 "write": true, 00:14:12.013 "write_zeroes": true 00:14:12.013 }, 00:14:12.013 "uuid": "6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d", 00:14:12.013 "zoned": false 00:14:12.013 } 00:14:12.013 ] 00:14:12.013 16:33:49 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84006 00:14:12.013 16:33:49 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:12.013 16:33:49 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:12.013 Running I/O for 10 seconds... 00:14:13.390 Latency(us) 00:14:13.390 [2024-11-16T16:33:50.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.390 [2024-11-16T16:33:50.881Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.390 Nvme0n1 : 1.00 10188.00 39.80 0.00 0.00 0.00 0.00 0.00 00:14:13.390 [2024-11-16T16:33:50.881Z] =================================================================================================================== 00:14:13.390 [2024-11-16T16:33:50.881Z] Total : 10188.00 39.80 0.00 0.00 0.00 0.00 0.00 00:14:13.390 00:14:13.958 16:33:51 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:14.216 [2024-11-16T16:33:51.707Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.216 Nvme0n1 : 2.00 10238.50 39.99 0.00 0.00 0.00 0.00 0.00 00:14:14.216 [2024-11-16T16:33:51.707Z] =================================================================================================================== 00:14:14.216 [2024-11-16T16:33:51.707Z] Total : 10238.50 39.99 0.00 0.00 0.00 0.00 0.00 00:14:14.216 00:14:14.216 true 00:14:14.475 16:33:51 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:14.475 16:33:51 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:14.733 16:33:51 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:14.733 16:33:51 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:14.733 16:33:51 -- target/nvmf_lvs_grow.sh@65 -- # wait 84006 00:14:14.992 [2024-11-16T16:33:52.483Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.992 Nvme0n1 : 3.00 10218.67 39.92 0.00 0.00 0.00 0.00 0.00 00:14:14.992 [2024-11-16T16:33:52.483Z] =================================================================================================================== 00:14:14.992 [2024-11-16T16:33:52.483Z] Total : 10218.67 39.92 0.00 0.00 0.00 0.00 0.00 00:14:14.992 00:14:16.369 [2024-11-16T16:33:53.860Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.369 Nvme0n1 : 4.00 10191.75 39.81 0.00 0.00 0.00 0.00 0.00 00:14:16.369 [2024-11-16T16:33:53.860Z] =================================================================================================================== 00:14:16.369 
[2024-11-16T16:33:53.860Z] Total : 10191.75 39.81 0.00 0.00 0.00 0.00 0.00 00:14:16.369 00:14:17.305 [2024-11-16T16:33:54.796Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.305 Nvme0n1 : 5.00 10186.60 39.79 0.00 0.00 0.00 0.00 0.00 00:14:17.305 [2024-11-16T16:33:54.796Z] =================================================================================================================== 00:14:17.305 [2024-11-16T16:33:54.796Z] Total : 10186.60 39.79 0.00 0.00 0.00 0.00 0.00 00:14:17.305 00:14:18.240 [2024-11-16T16:33:55.731Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.241 Nvme0n1 : 6.00 10045.50 39.24 0.00 0.00 0.00 0.00 0.00 00:14:18.241 [2024-11-16T16:33:55.732Z] =================================================================================================================== 00:14:18.241 [2024-11-16T16:33:55.732Z] Total : 10045.50 39.24 0.00 0.00 0.00 0.00 0.00 00:14:18.241 00:14:19.177 [2024-11-16T16:33:56.668Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.177 Nvme0n1 : 7.00 9923.43 38.76 0.00 0.00 0.00 0.00 0.00 00:14:19.177 [2024-11-16T16:33:56.668Z] =================================================================================================================== 00:14:19.177 [2024-11-16T16:33:56.668Z] Total : 9923.43 38.76 0.00 0.00 0.00 0.00 0.00 00:14:19.177 00:14:20.114 [2024-11-16T16:33:57.605Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.114 Nvme0n1 : 8.00 9906.75 38.70 0.00 0.00 0.00 0.00 0.00 00:14:20.114 [2024-11-16T16:33:57.605Z] =================================================================================================================== 00:14:20.114 [2024-11-16T16:33:57.605Z] Total : 9906.75 38.70 0.00 0.00 0.00 0.00 0.00 00:14:20.114 00:14:21.050 [2024-11-16T16:33:58.541Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.050 Nvme0n1 : 9.00 9913.33 38.72 0.00 0.00 0.00 0.00 0.00 00:14:21.050 [2024-11-16T16:33:58.541Z] =================================================================================================================== 00:14:21.050 [2024-11-16T16:33:58.541Z] Total : 9913.33 38.72 0.00 0.00 0.00 0.00 0.00 00:14:21.050 00:14:21.986 [2024-11-16T16:33:59.477Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.986 Nvme0n1 : 10.00 9926.80 38.78 0.00 0.00 0.00 0.00 0.00 00:14:21.986 [2024-11-16T16:33:59.477Z] =================================================================================================================== 00:14:21.986 [2024-11-16T16:33:59.477Z] Total : 9926.80 38.78 0.00 0.00 0.00 0.00 0.00 00:14:21.986 00:14:21.986 00:14:21.986 Latency(us) 00:14:21.986 [2024-11-16T16:33:59.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.986 [2024-11-16T16:33:59.477Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.986 Nvme0n1 : 10.00 9934.68 38.81 0.00 0.00 12879.79 6047.19 157286.40 00:14:21.986 [2024-11-16T16:33:59.477Z] =================================================================================================================== 00:14:21.986 [2024-11-16T16:33:59.477Z] Total : 9934.68 38.81 0.00 0.00 12879.79 6047.19 157286.40 00:14:21.986 0 00:14:22.245 16:33:59 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83954 00:14:22.245 16:33:59 -- common/autotest_common.sh@936 -- # '[' -z 83954 ']' 00:14:22.245 16:33:59 -- 
common/autotest_common.sh@940 -- # kill -0 83954 00:14:22.245 16:33:59 -- common/autotest_common.sh@941 -- # uname 00:14:22.245 16:33:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.245 16:33:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83954 00:14:22.245 killing process with pid 83954 00:14:22.245 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.245 00:14:22.245 Latency(us) 00:14:22.245 [2024-11-16T16:33:59.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.245 [2024-11-16T16:33:59.736Z] =================================================================================================================== 00:14:22.245 [2024-11-16T16:33:59.736Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.245 16:33:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:22.245 16:33:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:22.245 16:33:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83954' 00:14:22.245 16:33:59 -- common/autotest_common.sh@955 -- # kill 83954 00:14:22.245 16:33:59 -- common/autotest_common.sh@960 -- # wait 83954 00:14:22.245 16:33:59 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:22.812 16:34:00 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:22.812 16:34:00 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:22.812 16:34:00 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:22.812 16:34:00 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:22.812 16:34:00 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:23.070 [2024-11-16 16:34:00.528644] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:23.329 16:34:00 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:23.329 16:34:00 -- common/autotest_common.sh@650 -- # local es=0 00:14:23.329 16:34:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:23.329 16:34:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.329 16:34:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.329 16:34:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.329 16:34:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.329 16:34:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.329 16:34:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.329 16:34:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.329 16:34:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:23.329 16:34:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:23.588 2024/11/16 16:34:00 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: 
map[uuid:e455bca7-7aad-4d7d-8c7f-0c0afc141a9e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:23.588 request: 00:14:23.588 { 00:14:23.588 "method": "bdev_lvol_get_lvstores", 00:14:23.588 "params": { 00:14:23.588 "uuid": "e455bca7-7aad-4d7d-8c7f-0c0afc141a9e" 00:14:23.588 } 00:14:23.588 } 00:14:23.588 Got JSON-RPC error response 00:14:23.588 GoRPCClient: error on JSON-RPC call 00:14:23.588 16:34:00 -- common/autotest_common.sh@653 -- # es=1 00:14:23.588 16:34:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.588 16:34:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.588 16:34:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.588 16:34:00 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:23.588 aio_bdev 00:14:23.588 16:34:01 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d 00:14:23.588 16:34:01 -- common/autotest_common.sh@897 -- # local bdev_name=6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d 00:14:23.588 16:34:01 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:23.588 16:34:01 -- common/autotest_common.sh@899 -- # local i 00:14:23.588 16:34:01 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:23.588 16:34:01 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:23.588 16:34:01 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:23.846 16:34:01 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d -t 2000 00:14:24.104 [ 00:14:24.104 { 00:14:24.104 "aliases": [ 00:14:24.104 "lvs/lvol" 00:14:24.104 ], 00:14:24.104 "assigned_rate_limits": { 00:14:24.104 "r_mbytes_per_sec": 0, 00:14:24.104 "rw_ios_per_sec": 0, 00:14:24.104 "rw_mbytes_per_sec": 0, 00:14:24.104 "w_mbytes_per_sec": 0 00:14:24.104 }, 00:14:24.104 "block_size": 4096, 00:14:24.104 "claimed": false, 00:14:24.104 "driver_specific": { 00:14:24.104 "lvol": { 00:14:24.104 "base_bdev": "aio_bdev", 00:14:24.104 "clone": false, 00:14:24.104 "esnap_clone": false, 00:14:24.105 "lvol_store_uuid": "e455bca7-7aad-4d7d-8c7f-0c0afc141a9e", 00:14:24.105 "snapshot": false, 00:14:24.105 "thin_provision": false 00:14:24.105 } 00:14:24.105 }, 00:14:24.105 "name": "6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d", 00:14:24.105 "num_blocks": 38912, 00:14:24.105 "product_name": "Logical Volume", 00:14:24.105 "supported_io_types": { 00:14:24.105 "abort": false, 00:14:24.105 "compare": false, 00:14:24.105 "compare_and_write": false, 00:14:24.105 "flush": false, 00:14:24.105 "nvme_admin": false, 00:14:24.105 "nvme_io": false, 00:14:24.105 "read": true, 00:14:24.105 "reset": true, 00:14:24.105 "unmap": true, 00:14:24.105 "write": true, 00:14:24.105 "write_zeroes": true 00:14:24.105 }, 00:14:24.105 "uuid": "6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d", 00:14:24.105 "zoned": false 00:14:24.105 } 00:14:24.105 ] 00:14:24.105 16:34:01 -- common/autotest_common.sh@905 -- # return 0 00:14:24.105 16:34:01 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:24.105 16:34:01 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:24.365 16:34:01 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:24.365 16:34:01 -- target/nvmf_lvs_grow.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:24.365 16:34:01 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:24.625 16:34:02 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:24.625 16:34:02 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6e8e664f-c2c1-4ea5-876c-1ba9e63e8f5d 00:14:24.884 16:34:02 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e455bca7-7aad-4d7d-8c7f-0c0afc141a9e 00:14:25.144 16:34:02 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:25.403 16:34:02 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:25.661 ************************************ 00:14:25.661 END TEST lvs_grow_clean 00:14:25.661 ************************************ 00:14:25.661 00:14:25.661 real 0m17.697s 00:14:25.661 user 0m16.884s 00:14:25.661 sys 0m2.134s 00:14:25.661 16:34:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:25.661 16:34:03 -- common/autotest_common.sh@10 -- # set +x 00:14:25.661 16:34:03 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:25.662 16:34:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:25.662 16:34:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.662 16:34:03 -- common/autotest_common.sh@10 -- # set +x 00:14:25.662 ************************************ 00:14:25.662 START TEST lvs_grow_dirty 00:14:25.662 ************************************ 00:14:25.662 16:34:03 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:25.662 16:34:03 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:25.662 16:34:03 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:25.662 16:34:03 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:25.662 16:34:03 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:25.662 16:34:03 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:25.662 16:34:03 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:25.662 16:34:03 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:25.662 16:34:03 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:25.662 16:34:03 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:26.229 16:34:03 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:26.229 16:34:03 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:26.229 16:34:03 -- target/nvmf_lvs_grow.sh@28 -- # lvs=7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:26.229 16:34:03 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:26.229 16:34:03 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:26.488 16:34:03 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:26.488 16:34:03 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:26.488 16:34:03 -- target/nvmf_lvs_grow.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 lvol 150 00:14:26.746 16:34:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=75e950e9-8b87-4e78-924f-61fdf512083e 00:14:26.746 16:34:04 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:26.746 16:34:04 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:27.004 [2024-11-16 16:34:04.319625] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:27.004 [2024-11-16 16:34:04.319680] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:27.004 true 00:14:27.004 16:34:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:27.004 16:34:04 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:27.263 16:34:04 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:27.263 16:34:04 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:27.521 16:34:04 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 75e950e9-8b87-4e78-924f-61fdf512083e 00:14:27.780 16:34:05 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:28.039 16:34:05 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:28.299 16:34:05 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:28.299 16:34:05 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84391 00:14:28.299 16:34:05 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:28.299 16:34:05 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84391 /var/tmp/bdevperf.sock 00:14:28.299 16:34:05 -- common/autotest_common.sh@829 -- # '[' -z 84391 ']' 00:14:28.299 16:34:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:28.299 16:34:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:28.299 16:34:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:28.299 16:34:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.299 16:34:05 -- common/autotest_common.sh@10 -- # set +x 00:14:28.299 [2024-11-16 16:34:05.568808] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:28.299 [2024-11-16 16:34:05.568931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84391 ] 00:14:28.299 [2024-11-16 16:34:05.704957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.299 [2024-11-16 16:34:05.770669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.236 16:34:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.236 16:34:06 -- common/autotest_common.sh@862 -- # return 0 00:14:29.236 16:34:06 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:29.495 Nvme0n1 00:14:29.495 16:34:06 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:29.754 [ 00:14:29.754 { 00:14:29.754 "aliases": [ 00:14:29.754 "75e950e9-8b87-4e78-924f-61fdf512083e" 00:14:29.754 ], 00:14:29.754 "assigned_rate_limits": { 00:14:29.754 "r_mbytes_per_sec": 0, 00:14:29.754 "rw_ios_per_sec": 0, 00:14:29.754 "rw_mbytes_per_sec": 0, 00:14:29.754 "w_mbytes_per_sec": 0 00:14:29.754 }, 00:14:29.754 "block_size": 4096, 00:14:29.754 "claimed": false, 00:14:29.754 "driver_specific": { 00:14:29.754 "mp_policy": "active_passive", 00:14:29.754 "nvme": [ 00:14:29.754 { 00:14:29.754 "ctrlr_data": { 00:14:29.754 "ana_reporting": false, 00:14:29.754 "cntlid": 1, 00:14:29.754 "firmware_revision": "24.01.1", 00:14:29.754 "model_number": "SPDK bdev Controller", 00:14:29.754 "multi_ctrlr": true, 00:14:29.754 "oacs": { 00:14:29.754 "firmware": 0, 00:14:29.754 "format": 0, 00:14:29.754 "ns_manage": 0, 00:14:29.754 "security": 0 00:14:29.754 }, 00:14:29.754 "serial_number": "SPDK0", 00:14:29.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:29.754 "vendor_id": "0x8086" 00:14:29.754 }, 00:14:29.754 "ns_data": { 00:14:29.754 "can_share": true, 00:14:29.754 "id": 1 00:14:29.754 }, 00:14:29.754 "trid": { 00:14:29.754 "adrfam": "IPv4", 00:14:29.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:29.754 "traddr": "10.0.0.2", 00:14:29.754 "trsvcid": "4420", 00:14:29.754 "trtype": "TCP" 00:14:29.754 }, 00:14:29.754 "vs": { 00:14:29.754 "nvme_version": "1.3" 00:14:29.754 } 00:14:29.754 } 00:14:29.754 ] 00:14:29.754 }, 00:14:29.754 "name": "Nvme0n1", 00:14:29.754 "num_blocks": 38912, 00:14:29.754 "product_name": "NVMe disk", 00:14:29.754 "supported_io_types": { 00:14:29.754 "abort": true, 00:14:29.754 "compare": true, 00:14:29.754 "compare_and_write": true, 00:14:29.754 "flush": true, 00:14:29.754 "nvme_admin": true, 00:14:29.754 "nvme_io": true, 00:14:29.754 "read": true, 00:14:29.754 "reset": true, 00:14:29.754 "unmap": true, 00:14:29.754 "write": true, 00:14:29.754 "write_zeroes": true 00:14:29.754 }, 00:14:29.754 "uuid": "75e950e9-8b87-4e78-924f-61fdf512083e", 00:14:29.754 "zoned": false 00:14:29.754 } 00:14:29.754 ] 00:14:29.754 16:34:07 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:29.755 16:34:07 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84433 00:14:29.755 16:34:07 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:29.755 Running I/O for 10 seconds... 
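Both runs drive I/O the same way: bdevperf starts as a second SPDK app on core 1 (-m 0x2) with its own RPC socket, issuing 4 KiB (-o 4096) random writes (-w randwrite) at queue depth 128 (-q 128) for ten seconds (-t 10); -z makes it idle until perform_tests arrives over RPC so the NVMe-oF bdev can be attached first, and the per-second rows in the tables below come from the -S 1 interim reporting. The get_bdevs dump above also confirms the sizing arithmetic: a 150 MiB lvol is rounded up to 38 whole 4 MiB clusters, i.e. 152 MiB = 38912 blocks of 4 KiB. A condensed sketch of the driving sequence, with paths as in the log:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$bdevperf" -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach the exported namespace as bdev Nvme0n1, then start the workload
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests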
00:14:30.693 Latency(us) 00:14:30.693 [2024-11-16T16:34:08.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.693 [2024-11-16T16:34:08.184Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.693 Nvme0n1 : 1.00 10386.00 40.57 0.00 0.00 0.00 0.00 0.00 00:14:30.693 [2024-11-16T16:34:08.184Z] =================================================================================================================== 00:14:30.693 [2024-11-16T16:34:08.184Z] Total : 10386.00 40.57 0.00 0.00 0.00 0.00 0.00 00:14:30.693 00:14:31.629 16:34:09 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:31.629 [2024-11-16T16:34:09.120Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.629 Nvme0n1 : 2.00 10419.00 40.70 0.00 0.00 0.00 0.00 0.00 00:14:31.629 [2024-11-16T16:34:09.120Z] =================================================================================================================== 00:14:31.629 [2024-11-16T16:34:09.120Z] Total : 10419.00 40.70 0.00 0.00 0.00 0.00 0.00 00:14:31.629 00:14:31.888 true 00:14:31.888 16:34:09 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:31.888 16:34:09 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:32.146 16:34:09 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:32.146 16:34:09 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:32.146 16:34:09 -- target/nvmf_lvs_grow.sh@65 -- # wait 84433 00:14:32.712 [2024-11-16T16:34:10.203Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.712 Nvme0n1 : 3.00 10084.33 39.39 0.00 0.00 0.00 0.00 0.00 00:14:32.712 [2024-11-16T16:34:10.203Z] =================================================================================================================== 00:14:32.712 [2024-11-16T16:34:10.203Z] Total : 10084.33 39.39 0.00 0.00 0.00 0.00 0.00 00:14:32.712 00:14:33.648 [2024-11-16T16:34:11.139Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.648 Nvme0n1 : 4.00 10109.50 39.49 0.00 0.00 0.00 0.00 0.00 00:14:33.648 [2024-11-16T16:34:11.139Z] =================================================================================================================== 00:14:33.648 [2024-11-16T16:34:11.139Z] Total : 10109.50 39.49 0.00 0.00 0.00 0.00 0.00 00:14:33.648 00:14:35.026 [2024-11-16T16:34:12.517Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.026 Nvme0n1 : 5.00 10154.40 39.67 0.00 0.00 0.00 0.00 0.00 00:14:35.026 [2024-11-16T16:34:12.517Z] =================================================================================================================== 00:14:35.026 [2024-11-16T16:34:12.517Z] Total : 10154.40 39.67 0.00 0.00 0.00 0.00 0.00 00:14:35.026 00:14:35.976 [2024-11-16T16:34:13.467Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.976 Nvme0n1 : 6.00 10173.33 39.74 0.00 0.00 0.00 0.00 0.00 00:14:35.976 [2024-11-16T16:34:13.467Z] =================================================================================================================== 00:14:35.976 [2024-11-16T16:34:13.467Z] Total : 10173.33 39.74 0.00 0.00 0.00 0.00 0.00 00:14:35.976 00:14:36.912 [2024-11-16T16:34:14.403Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:14:36.912 Nvme0n1 : 7.00 10175.14 39.75 0.00 0.00 0.00 0.00 0.00 00:14:36.912 [2024-11-16T16:34:14.403Z] =================================================================================================================== 00:14:36.912 [2024-11-16T16:34:14.403Z] Total : 10175.14 39.75 0.00 0.00 0.00 0.00 0.00 00:14:36.912 00:14:37.848 [2024-11-16T16:34:15.339Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.848 Nvme0n1 : 8.00 10009.50 39.10 0.00 0.00 0.00 0.00 0.00 00:14:37.848 [2024-11-16T16:34:15.339Z] =================================================================================================================== 00:14:37.848 [2024-11-16T16:34:15.339Z] Total : 10009.50 39.10 0.00 0.00 0.00 0.00 0.00 00:14:37.848 00:14:38.786 [2024-11-16T16:34:16.277Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.786 Nvme0n1 : 9.00 9994.44 39.04 0.00 0.00 0.00 0.00 0.00 00:14:38.786 [2024-11-16T16:34:16.277Z] =================================================================================================================== 00:14:38.786 [2024-11-16T16:34:16.277Z] Total : 9994.44 39.04 0.00 0.00 0.00 0.00 0.00 00:14:38.786 00:14:39.722 [2024-11-16T16:34:17.213Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.722 Nvme0n1 : 10.00 9981.90 38.99 0.00 0.00 0.00 0.00 0.00 00:14:39.722 [2024-11-16T16:34:17.213Z] =================================================================================================================== 00:14:39.722 [2024-11-16T16:34:17.213Z] Total : 9981.90 38.99 0.00 0.00 0.00 0.00 0.00 00:14:39.722 00:14:39.722 00:14:39.722 Latency(us) 00:14:39.722 [2024-11-16T16:34:17.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.722 [2024-11-16T16:34:17.213Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.722 Nvme0n1 : 10.01 9989.24 39.02 0.00 0.00 12809.47 4498.15 134408.38 00:14:39.722 [2024-11-16T16:34:17.213Z] =================================================================================================================== 00:14:39.722 [2024-11-16T16:34:17.213Z] Total : 9989.24 39.02 0.00 0.00 12809.47 4498.15 134408.38 00:14:39.722 0 00:14:39.722 16:34:17 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84391 00:14:39.722 16:34:17 -- common/autotest_common.sh@936 -- # '[' -z 84391 ']' 00:14:39.722 16:34:17 -- common/autotest_common.sh@940 -- # kill -0 84391 00:14:39.722 16:34:17 -- common/autotest_common.sh@941 -- # uname 00:14:39.722 16:34:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.722 16:34:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84391 00:14:39.722 16:34:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:39.722 16:34:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:39.722 killing process with pid 84391 00:14:39.722 16:34:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84391' 00:14:39.722 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.722 00:14:39.722 Latency(us) 00:14:39.722 [2024-11-16T16:34:17.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.722 [2024-11-16T16:34:17.213Z] =================================================================================================================== 00:14:39.722 [2024-11-16T16:34:17.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.722 16:34:17 -- 
common/autotest_common.sh@955 -- # kill 84391 00:14:39.722 16:34:17 -- common/autotest_common.sh@960 -- # wait 84391 00:14:39.982 16:34:17 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:40.241 16:34:17 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:40.241 16:34:17 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:40.501 16:34:17 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:40.501 16:34:17 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:40.501 16:34:17 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83798 00:14:40.501 16:34:17 -- target/nvmf_lvs_grow.sh@74 -- # wait 83798 00:14:40.501 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83798 Killed "${NVMF_APP[@]}" "$@" 00:14:40.501 16:34:17 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:40.501 16:34:17 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:40.501 16:34:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:40.501 16:34:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.501 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:14:40.501 16:34:17 -- nvmf/common.sh@469 -- # nvmfpid=84589 00:14:40.501 16:34:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:40.501 16:34:17 -- nvmf/common.sh@470 -- # waitforlisten 84589 00:14:40.501 16:34:17 -- common/autotest_common.sh@829 -- # '[' -z 84589 ']' 00:14:40.501 16:34:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.501 16:34:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.501 16:34:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.501 16:34:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.501 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:14:40.501 [2024-11-16 16:34:17.914614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:40.501 [2024-11-16 16:34:17.914699] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.760 [2024-11-16 16:34:18.050193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.760 [2024-11-16 16:34:18.121567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:40.760 [2024-11-16 16:34:18.121713] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.760 [2024-11-16 16:34:18.121725] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.760 [2024-11-16 16:34:18.121733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
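This hard kill is the whole point of the dirty variant: the lvstore was grown to 99 clusters while I/O ran, but the first target (pid 83798) dies on SIGKILL instead of shutting down, so the blobstore on the AIO file never gets a clean close. When the replacement target (pid 84589) re-registers the same file below, the blobstore load path notices the dirty state and replays metadata, which is what the upcoming "Performing recovery on blobstore" notices report. Schematically, continuing the variables from the sketches above:

    "$rpc" bdev_lvol_grow_lvstore -u "$lvs"      # grow while bdevperf is still writing
    kill -9 "$nvmfpid"                           # crash: no clean superblock write
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    "$rpc" bdev_aio_create "$aio" aio_bdev 4096  # reload triggers blobstore recovery

The test then asserts that both the grown capacity (data_clusters == 99) and the free-space accounting (free_clusters == 61, i.e. 99 clusters minus the 38 held by the lvol) survived the crash.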
00:14:40.760 [2024-11-16 16:34:18.121760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.328 16:34:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.328 16:34:18 -- common/autotest_common.sh@862 -- # return 0 00:14:41.328 16:34:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:41.328 16:34:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.328 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:14:41.587 16:34:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.587 16:34:18 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:41.855 [2024-11-16 16:34:19.081506] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:41.855 [2024-11-16 16:34:19.081753] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:41.855 [2024-11-16 16:34:19.081971] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:41.855 16:34:19 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:41.856 16:34:19 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 75e950e9-8b87-4e78-924f-61fdf512083e 00:14:41.856 16:34:19 -- common/autotest_common.sh@897 -- # local bdev_name=75e950e9-8b87-4e78-924f-61fdf512083e 00:14:41.856 16:34:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:41.856 16:34:19 -- common/autotest_common.sh@899 -- # local i 00:14:41.856 16:34:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:41.856 16:34:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:41.856 16:34:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:42.118 16:34:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75e950e9-8b87-4e78-924f-61fdf512083e -t 2000 00:14:42.377 [ 00:14:42.377 { 00:14:42.377 "aliases": [ 00:14:42.377 "lvs/lvol" 00:14:42.377 ], 00:14:42.377 "assigned_rate_limits": { 00:14:42.377 "r_mbytes_per_sec": 0, 00:14:42.377 "rw_ios_per_sec": 0, 00:14:42.377 "rw_mbytes_per_sec": 0, 00:14:42.377 "w_mbytes_per_sec": 0 00:14:42.377 }, 00:14:42.377 "block_size": 4096, 00:14:42.377 "claimed": false, 00:14:42.377 "driver_specific": { 00:14:42.377 "lvol": { 00:14:42.377 "base_bdev": "aio_bdev", 00:14:42.377 "clone": false, 00:14:42.377 "esnap_clone": false, 00:14:42.377 "lvol_store_uuid": "7e6383e2-afc1-41d2-8986-a2509b62aac9", 00:14:42.377 "snapshot": false, 00:14:42.377 "thin_provision": false 00:14:42.377 } 00:14:42.377 }, 00:14:42.377 "name": "75e950e9-8b87-4e78-924f-61fdf512083e", 00:14:42.377 "num_blocks": 38912, 00:14:42.377 "product_name": "Logical Volume", 00:14:42.377 "supported_io_types": { 00:14:42.377 "abort": false, 00:14:42.377 "compare": false, 00:14:42.377 "compare_and_write": false, 00:14:42.377 "flush": false, 00:14:42.377 "nvme_admin": false, 00:14:42.377 "nvme_io": false, 00:14:42.377 "read": true, 00:14:42.377 "reset": true, 00:14:42.377 "unmap": true, 00:14:42.377 "write": true, 00:14:42.377 "write_zeroes": true 00:14:42.377 }, 00:14:42.377 "uuid": "75e950e9-8b87-4e78-924f-61fdf512083e", 00:14:42.377 "zoned": false 00:14:42.377 } 00:14:42.377 ] 00:14:42.377 16:34:19 -- common/autotest_common.sh@905 -- # return 0 00:14:42.377 16:34:19 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:42.377 16:34:19 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:42.377 16:34:19 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:42.377 16:34:19 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:42.377 16:34:19 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:42.636 16:34:20 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:42.636 16:34:20 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:42.895 [2024-11-16 16:34:20.287206] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:42.895 16:34:20 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:42.895 16:34:20 -- common/autotest_common.sh@650 -- # local es=0 00:14:42.895 16:34:20 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:42.895 16:34:20 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.895 16:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.895 16:34:20 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.895 16:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.895 16:34:20 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.895 16:34:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.895 16:34:20 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.895 16:34:20 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:42.895 16:34:20 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:43.153 2024/11/16 16:34:20 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:7e6383e2-afc1-41d2-8986-a2509b62aac9], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:43.153 request: 00:14:43.153 { 00:14:43.153 "method": "bdev_lvol_get_lvstores", 00:14:43.153 "params": { 00:14:43.153 "uuid": "7e6383e2-afc1-41d2-8986-a2509b62aac9" 00:14:43.153 } 00:14:43.153 } 00:14:43.153 Got JSON-RPC error response 00:14:43.153 GoRPCClient: error on JSON-RPC call 00:14:43.153 16:34:20 -- common/autotest_common.sh@653 -- # es=1 00:14:43.153 16:34:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.153 16:34:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.153 16:34:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.153 16:34:20 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.412 aio_bdev 00:14:43.412 16:34:20 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 75e950e9-8b87-4e78-924f-61fdf512083e 00:14:43.412 16:34:20 -- common/autotest_common.sh@897 -- # local bdev_name=75e950e9-8b87-4e78-924f-61fdf512083e 00:14:43.412 16:34:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:43.412 
16:34:20 -- common/autotest_common.sh@899 -- # local i 00:14:43.412 16:34:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:43.412 16:34:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:43.412 16:34:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.671 16:34:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75e950e9-8b87-4e78-924f-61fdf512083e -t 2000 00:14:43.929 [ 00:14:43.929 { 00:14:43.929 "aliases": [ 00:14:43.929 "lvs/lvol" 00:14:43.929 ], 00:14:43.929 "assigned_rate_limits": { 00:14:43.929 "r_mbytes_per_sec": 0, 00:14:43.929 "rw_ios_per_sec": 0, 00:14:43.929 "rw_mbytes_per_sec": 0, 00:14:43.929 "w_mbytes_per_sec": 0 00:14:43.929 }, 00:14:43.929 "block_size": 4096, 00:14:43.929 "claimed": false, 00:14:43.929 "driver_specific": { 00:14:43.929 "lvol": { 00:14:43.929 "base_bdev": "aio_bdev", 00:14:43.929 "clone": false, 00:14:43.929 "esnap_clone": false, 00:14:43.929 "lvol_store_uuid": "7e6383e2-afc1-41d2-8986-a2509b62aac9", 00:14:43.929 "snapshot": false, 00:14:43.929 "thin_provision": false 00:14:43.929 } 00:14:43.929 }, 00:14:43.929 "name": "75e950e9-8b87-4e78-924f-61fdf512083e", 00:14:43.929 "num_blocks": 38912, 00:14:43.929 "product_name": "Logical Volume", 00:14:43.929 "supported_io_types": { 00:14:43.929 "abort": false, 00:14:43.929 "compare": false, 00:14:43.929 "compare_and_write": false, 00:14:43.929 "flush": false, 00:14:43.929 "nvme_admin": false, 00:14:43.929 "nvme_io": false, 00:14:43.929 "read": true, 00:14:43.929 "reset": true, 00:14:43.929 "unmap": true, 00:14:43.929 "write": true, 00:14:43.929 "write_zeroes": true 00:14:43.929 }, 00:14:43.929 "uuid": "75e950e9-8b87-4e78-924f-61fdf512083e", 00:14:43.929 "zoned": false 00:14:43.929 } 00:14:43.929 ] 00:14:43.929 16:34:21 -- common/autotest_common.sh@905 -- # return 0 00:14:43.929 16:34:21 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:43.929 16:34:21 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:44.188 16:34:21 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:44.188 16:34:21 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:44.188 16:34:21 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:44.447 16:34:21 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:44.447 16:34:21 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 75e950e9-8b87-4e78-924f-61fdf512083e 00:14:44.706 16:34:21 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e6383e2-afc1-41d2-8986-a2509b62aac9 00:14:44.706 16:34:22 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:44.965 16:34:22 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:45.533 00:14:45.533 real 0m19.612s 00:14:45.533 user 0m40.259s 00:14:45.533 sys 0m7.901s 00:14:45.533 16:34:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:45.533 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:45.533 ************************************ 00:14:45.533 END TEST lvs_grow_dirty 00:14:45.533 ************************************ 00:14:45.533 16:34:22 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:45.533 16:34:22 -- common/autotest_common.sh@806 -- # type=--id 00:14:45.533 16:34:22 -- common/autotest_common.sh@807 -- # id=0 00:14:45.533 16:34:22 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:45.533 16:34:22 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:45.533 16:34:22 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:45.533 16:34:22 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:45.533 16:34:22 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:45.533 16:34:22 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:45.533 nvmf_trace.0 00:14:45.533 16:34:22 -- common/autotest_common.sh@821 -- # return 0 00:14:45.533 16:34:22 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:45.533 16:34:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:45.533 16:34:22 -- nvmf/common.sh@116 -- # sync 00:14:46.469 16:34:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:46.469 16:34:23 -- nvmf/common.sh@119 -- # set +e 00:14:46.469 16:34:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:46.469 16:34:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:46.469 rmmod nvme_tcp 00:14:46.469 rmmod nvme_fabrics 00:14:46.728 rmmod nvme_keyring 00:14:46.728 16:34:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:46.728 16:34:23 -- nvmf/common.sh@123 -- # set -e 00:14:46.728 16:34:23 -- nvmf/common.sh@124 -- # return 0 00:14:46.728 16:34:23 -- nvmf/common.sh@477 -- # '[' -n 84589 ']' 00:14:46.728 16:34:23 -- nvmf/common.sh@478 -- # killprocess 84589 00:14:46.728 16:34:23 -- common/autotest_common.sh@936 -- # '[' -z 84589 ']' 00:14:46.728 16:34:23 -- common/autotest_common.sh@940 -- # kill -0 84589 00:14:46.729 16:34:23 -- common/autotest_common.sh@941 -- # uname 00:14:46.729 16:34:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.729 16:34:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84589 00:14:46.729 16:34:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:46.729 16:34:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:46.729 killing process with pid 84589 00:14:46.729 16:34:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84589' 00:14:46.729 16:34:24 -- common/autotest_common.sh@955 -- # kill 84589 00:14:46.729 16:34:24 -- common/autotest_common.sh@960 -- # wait 84589 00:14:46.988 16:34:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:46.988 16:34:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:46.988 16:34:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:46.988 16:34:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.988 16:34:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:46.988 16:34:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.988 16:34:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.988 16:34:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.988 16:34:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:46.988 00:14:46.988 real 0m40.891s 00:14:46.988 user 1m4.227s 00:14:46.988 sys 0m11.653s 00:14:46.988 16:34:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:46.988 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:46.988 
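Teardown mirrors the setup: process_shm archives the tracepoint shared-memory file for offline analysis, nvmftestfini syncs and unloads the NVMe/TCP kernel modules (the rmmod lines above), kills the target, removes the test namespace, and flushes the initiator address. Roughly, with the namespace deletion standing in for what remove_spdk_ns does (an assumption, since that helper's body is not shown here):

    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    modprobe -v -r nvme-tcp             # also drags out nvme_fabrics and nvme_keyring
    kill "$nvmfpid"; wait "$nvmfpid"
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of remove_spdk_ns
    ip -4 addr flush nvmf_init_if

The timing summary (real 0m40.891s, dominated by the two ten-second bdevperf runs) closes out nvmf_lvs_grow before the next test starts.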
************************************ 00:14:46.988 END TEST nvmf_lvs_grow 00:14:46.988 ************************************ 00:14:46.988 16:34:24 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:46.988 16:34:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:46.988 16:34:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.988 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:46.988 ************************************ 00:14:46.988 START TEST nvmf_bdev_io_wait 00:14:46.988 ************************************ 00:14:46.988 16:34:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:46.988 * Looking for test storage... 00:14:46.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:46.988 16:34:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:46.988 16:34:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:46.988 16:34:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:47.248 16:34:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:47.248 16:34:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:47.248 16:34:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:47.248 16:34:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:47.248 16:34:24 -- scripts/common.sh@335 -- # IFS=.-: 00:14:47.248 16:34:24 -- scripts/common.sh@335 -- # read -ra ver1 00:14:47.248 16:34:24 -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.248 16:34:24 -- scripts/common.sh@336 -- # read -ra ver2 00:14:47.248 16:34:24 -- scripts/common.sh@337 -- # local 'op=<' 00:14:47.248 16:34:24 -- scripts/common.sh@339 -- # ver1_l=2 00:14:47.248 16:34:24 -- scripts/common.sh@340 -- # ver2_l=1 00:14:47.248 16:34:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:47.248 16:34:24 -- scripts/common.sh@343 -- # case "$op" in 00:14:47.248 16:34:24 -- scripts/common.sh@344 -- # : 1 00:14:47.248 16:34:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:47.248 16:34:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:47.248 16:34:24 -- scripts/common.sh@364 -- # decimal 1 00:14:47.248 16:34:24 -- scripts/common.sh@352 -- # local d=1 00:14:47.248 16:34:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.248 16:34:24 -- scripts/common.sh@354 -- # echo 1 00:14:47.248 16:34:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:47.248 16:34:24 -- scripts/common.sh@365 -- # decimal 2 00:14:47.248 16:34:24 -- scripts/common.sh@352 -- # local d=2 00:14:47.248 16:34:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.248 16:34:24 -- scripts/common.sh@354 -- # echo 2 00:14:47.248 16:34:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:47.248 16:34:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:47.248 16:34:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:47.248 16:34:24 -- scripts/common.sh@367 -- # return 0 00:14:47.248 16:34:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.248 16:34:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:47.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.248 --rc genhtml_branch_coverage=1 00:14:47.248 --rc genhtml_function_coverage=1 00:14:47.248 --rc genhtml_legend=1 00:14:47.248 --rc geninfo_all_blocks=1 00:14:47.248 --rc geninfo_unexecuted_blocks=1 00:14:47.248 00:14:47.248 ' 00:14:47.248 16:34:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:47.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.248 --rc genhtml_branch_coverage=1 00:14:47.248 --rc genhtml_function_coverage=1 00:14:47.248 --rc genhtml_legend=1 00:14:47.248 --rc geninfo_all_blocks=1 00:14:47.248 --rc geninfo_unexecuted_blocks=1 00:14:47.248 00:14:47.248 ' 00:14:47.248 16:34:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:47.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.248 --rc genhtml_branch_coverage=1 00:14:47.248 --rc genhtml_function_coverage=1 00:14:47.248 --rc genhtml_legend=1 00:14:47.248 --rc geninfo_all_blocks=1 00:14:47.248 --rc geninfo_unexecuted_blocks=1 00:14:47.248 00:14:47.248 ' 00:14:47.248 16:34:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:47.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.248 --rc genhtml_branch_coverage=1 00:14:47.248 --rc genhtml_function_coverage=1 00:14:47.248 --rc genhtml_legend=1 00:14:47.248 --rc geninfo_all_blocks=1 00:14:47.248 --rc geninfo_unexecuted_blocks=1 00:14:47.248 00:14:47.248 ' 00:14:47.248 16:34:24 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.248 16:34:24 -- nvmf/common.sh@7 -- # uname -s 00:14:47.248 16:34:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.248 16:34:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.248 16:34:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.248 16:34:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.248 16:34:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.248 16:34:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.248 16:34:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.248 16:34:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.248 16:34:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.248 16:34:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.248 16:34:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
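[annotation] The hostnqn printed above comes straight out of nvme-cli's generator; a minimal stand-alone sketch of the same step follows. The HOSTID-from-suffix extraction is an assumption, inferred from the matching UUIDs in the log, not read from nvmf/common.sh.

    # Hedged sketch: reproduce the per-test host identity outside the harness.
    NVME_HOSTNQN=$(nvme gen-hostnqn)           # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}            # assumption: HOSTID is the trailing UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

Generating a fresh NQN per run keeps concurrent autotest jobs from colliding on host identity.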
00:14:47.248 16:34:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:14:47.248 16:34:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.248 16:34:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.248 16:34:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.248 16:34:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.248 16:34:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.248 16:34:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.248 16:34:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.248 16:34:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.248 16:34:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.248 16:34:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.248 16:34:24 -- paths/export.sh@5 -- # export PATH 00:14:47.248 16:34:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.248 16:34:24 -- nvmf/common.sh@46 -- # : 0 00:14:47.248 16:34:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:47.248 16:34:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:47.248 16:34:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:47.248 16:34:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.248 16:34:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.248 16:34:24 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:47.248 16:34:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:47.248 16:34:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:47.248 16:34:24 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.248 16:34:24 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.248 16:34:24 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:47.248 16:34:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:47.248 16:34:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.248 16:34:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:47.248 16:34:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:47.248 16:34:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:47.248 16:34:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.248 16:34:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.248 16:34:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.248 16:34:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:47.248 16:34:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:47.248 16:34:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:47.248 16:34:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:47.248 16:34:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:47.248 16:34:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:47.248 16:34:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.248 16:34:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.248 16:34:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:47.248 16:34:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:47.248 16:34:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.248 16:34:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.248 16:34:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.248 16:34:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.248 16:34:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.248 16:34:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.248 16:34:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.248 16:34:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.248 16:34:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:47.248 16:34:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:47.248 Cannot find device "nvmf_tgt_br" 00:14:47.248 16:34:24 -- nvmf/common.sh@154 -- # true 00:14:47.248 16:34:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.248 Cannot find device "nvmf_tgt_br2" 00:14:47.248 16:34:24 -- nvmf/common.sh@155 -- # true 00:14:47.248 16:34:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:47.248 16:34:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:47.248 Cannot find device "nvmf_tgt_br" 00:14:47.248 16:34:24 -- nvmf/common.sh@157 -- # true 00:14:47.248 16:34:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:47.249 Cannot find device "nvmf_tgt_br2" 00:14:47.249 16:34:24 -- nvmf/common.sh@158 -- # true 00:14:47.249 16:34:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:47.249 16:34:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:47.249 16:34:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.249 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.249 16:34:24 -- nvmf/common.sh@161 -- # true 00:14:47.249 16:34:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.249 16:34:24 -- nvmf/common.sh@162 -- # true 00:14:47.249 16:34:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:47.249 16:34:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:47.249 16:34:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:47.249 16:34:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:47.249 16:34:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:47.249 16:34:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:47.508 16:34:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:47.508 16:34:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:47.508 16:34:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:47.508 16:34:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:47.508 16:34:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:47.508 16:34:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:47.508 16:34:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:47.508 16:34:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:47.508 16:34:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:47.508 16:34:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:47.508 16:34:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:47.508 16:34:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:47.508 16:34:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:47.508 16:34:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:47.508 16:34:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:47.508 16:34:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:47.508 16:34:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:47.508 16:34:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:47.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:47.508 00:14:47.508 --- 10.0.0.2 ping statistics --- 00:14:47.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.508 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:47.508 16:34:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:47.508 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:47.508 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:47.508 00:14:47.508 --- 10.0.0.3 ping statistics --- 00:14:47.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.508 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:47.508 16:34:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:47.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:47.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:47.508 00:14:47.508 --- 10.0.0.1 ping statistics --- 00:14:47.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.508 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:47.508 16:34:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.508 16:34:24 -- nvmf/common.sh@421 -- # return 0 00:14:47.508 16:34:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:47.508 16:34:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.508 16:34:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:47.508 16:34:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:47.508 16:34:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.508 16:34:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:47.508 16:34:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:47.508 16:34:24 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:47.508 16:34:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:47.508 16:34:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:47.508 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:47.508 16:34:24 -- nvmf/common.sh@469 -- # nvmfpid=85023 00:14:47.508 16:34:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:47.508 16:34:24 -- nvmf/common.sh@470 -- # waitforlisten 85023 00:14:47.508 16:34:24 -- common/autotest_common.sh@829 -- # '[' -z 85023 ']' 00:14:47.508 16:34:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.508 16:34:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.508 16:34:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.508 16:34:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.508 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:47.508 [2024-11-16 16:34:24.963545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:47.508 [2024-11-16 16:34:24.963633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.767 [2024-11-16 16:34:25.106746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.767 [2024-11-16 16:34:25.179333] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:47.767 [2024-11-16 16:34:25.179483] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.767 [2024-11-16 16:34:25.179496] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.767 [2024-11-16 16:34:25.179504] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
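[annotation] The waitforlisten call above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A hedged sketch of what that polling loop amounts to; the rpc_get_methods probe and the 0.1 s interval are assumptions, only max_retries=100 and the socket path appear in the log:

    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        # Probe a cheap RPC; any answer means the target is up and listening.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Because the target was launched with --wait-for-rpc, it answers RPCs before the framework is initialized; the bdev_set_options / framework_start_init calls below rely on exactly that window.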
00:14:47.767 [2024-11-16 16:34:25.179567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.767 [2024-11-16 16:34:25.180270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.767 [2024-11-16 16:34:25.180345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.767 [2024-11-16 16:34:25.180352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.703 16:34:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.703 16:34:25 -- common/autotest_common.sh@862 -- # return 0 00:14:48.703 16:34:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:48.703 16:34:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:48.703 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:14:48.703 16:34:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.703 16:34:26 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:48.703 16:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.703 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.703 16:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.703 16:34:26 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:48.703 16:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.703 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.703 16:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.703 16:34:26 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.703 16:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.703 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.703 [2024-11-16 16:34:26.124485] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.703 16:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.703 16:34:26 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:48.703 16:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.703 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.703 Malloc0 00:14:48.703 16:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.703 16:34:26 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:48.703 16:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.703 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.703 16:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.703 16:34:26 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.703 16:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.703 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.703 16:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.703 16:34:26 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.703 16:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.703 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:14:48.703 [2024-11-16 16:34:26.189451] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.963 16:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.963 16:34:26 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=85076 00:14:48.963 16:34:26 
-- target/bdev_io_wait.sh@30 -- # READ_PID=85078 00:14:48.963 16:34:26 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:48.963 16:34:26 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:48.963 16:34:26 -- nvmf/common.sh@520 -- # config=() 00:14:48.963 16:34:26 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.963 16:34:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.963 16:34:26 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:48.963 16:34:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.963 { 00:14:48.963 "params": { 00:14:48.963 "name": "Nvme$subsystem", 00:14:48.963 "trtype": "$TEST_TRANSPORT", 00:14:48.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.963 "adrfam": "ipv4", 00:14:48.963 "trsvcid": "$NVMF_PORT", 00:14:48.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.963 "hdgst": ${hdgst:-false}, 00:14:48.963 "ddgst": ${ddgst:-false} 00:14:48.963 }, 00:14:48.963 "method": "bdev_nvme_attach_controller" 00:14:48.963 } 00:14:48.963 EOF 00:14:48.963 )") 00:14:48.963 16:34:26 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=85080 00:14:48.963 16:34:26 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:48.963 16:34:26 -- nvmf/common.sh@520 -- # config=() 00:14:48.963 16:34:26 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.963 16:34:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.963 16:34:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.963 { 00:14:48.963 "params": { 00:14:48.963 "name": "Nvme$subsystem", 00:14:48.963 "trtype": "$TEST_TRANSPORT", 00:14:48.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.963 "adrfam": "ipv4", 00:14:48.963 "trsvcid": "$NVMF_PORT", 00:14:48.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.963 "hdgst": ${hdgst:-false}, 00:14:48.964 "ddgst": ${ddgst:-false} 00:14:48.964 }, 00:14:48.964 "method": "bdev_nvme_attach_controller" 00:14:48.964 } 00:14:48.964 EOF 00:14:48.964 )") 00:14:48.964 16:34:26 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:48.964 16:34:26 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=85082 00:14:48.964 16:34:26 -- target/bdev_io_wait.sh@35 -- # sync 00:14:48.964 16:34:26 -- nvmf/common.sh@542 -- # cat 00:14:48.964 16:34:26 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:48.964 16:34:26 -- nvmf/common.sh@542 -- # cat 00:14:48.964 16:34:26 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:48.964 16:34:26 -- nvmf/common.sh@520 -- # config=() 00:14:48.964 16:34:26 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.964 16:34:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.964 16:34:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.964 { 00:14:48.964 "params": { 00:14:48.964 "name": "Nvme$subsystem", 00:14:48.964 "trtype": "$TEST_TRANSPORT", 00:14:48.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.964 "adrfam": "ipv4", 00:14:48.964 "trsvcid": "$NVMF_PORT", 00:14:48.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:14:48.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.964 "hdgst": ${hdgst:-false}, 00:14:48.964 "ddgst": ${ddgst:-false} 00:14:48.964 }, 00:14:48.964 "method": "bdev_nvme_attach_controller" 00:14:48.964 } 00:14:48.964 EOF 00:14:48.964 )") 00:14:48.964 16:34:26 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:48.964 16:34:26 -- nvmf/common.sh@520 -- # config=() 00:14:48.964 16:34:26 -- nvmf/common.sh@520 -- # local subsystem config 00:14:48.964 16:34:26 -- nvmf/common.sh@542 -- # cat 00:14:48.964 16:34:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:48.964 16:34:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:48.964 { 00:14:48.964 "params": { 00:14:48.964 "name": "Nvme$subsystem", 00:14:48.964 "trtype": "$TEST_TRANSPORT", 00:14:48.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.964 "adrfam": "ipv4", 00:14:48.964 "trsvcid": "$NVMF_PORT", 00:14:48.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.964 "hdgst": ${hdgst:-false}, 00:14:48.964 "ddgst": ${ddgst:-false} 00:14:48.964 }, 00:14:48.964 "method": "bdev_nvme_attach_controller" 00:14:48.964 } 00:14:48.964 EOF 00:14:48.964 )") 00:14:48.964 16:34:26 -- nvmf/common.sh@544 -- # jq . 00:14:48.964 16:34:26 -- nvmf/common.sh@542 -- # cat 00:14:48.964 16:34:26 -- nvmf/common.sh@544 -- # jq . 00:14:48.964 16:34:26 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.964 16:34:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.964 "params": { 00:14:48.964 "name": "Nvme1", 00:14:48.964 "trtype": "tcp", 00:14:48.964 "traddr": "10.0.0.2", 00:14:48.964 "adrfam": "ipv4", 00:14:48.964 "trsvcid": "4420", 00:14:48.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.964 "hdgst": false, 00:14:48.964 "ddgst": false 00:14:48.964 }, 00:14:48.964 "method": "bdev_nvme_attach_controller" 00:14:48.964 }' 00:14:48.964 16:34:26 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.964 16:34:26 -- nvmf/common.sh@544 -- # jq . 00:14:48.964 16:34:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.964 "params": { 00:14:48.964 "name": "Nvme1", 00:14:48.964 "trtype": "tcp", 00:14:48.964 "traddr": "10.0.0.2", 00:14:48.964 "adrfam": "ipv4", 00:14:48.964 "trsvcid": "4420", 00:14:48.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.964 "hdgst": false, 00:14:48.964 "ddgst": false 00:14:48.964 }, 00:14:48.964 "method": "bdev_nvme_attach_controller" 00:14:48.964 }' 00:14:48.964 16:34:26 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.964 16:34:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.964 "params": { 00:14:48.964 "name": "Nvme1", 00:14:48.964 "trtype": "tcp", 00:14:48.964 "traddr": "10.0.0.2", 00:14:48.964 "adrfam": "ipv4", 00:14:48.964 "trsvcid": "4420", 00:14:48.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.964 "hdgst": false, 00:14:48.964 "ddgst": false 00:14:48.964 }, 00:14:48.964 "method": "bdev_nvme_attach_controller" 00:14:48.964 }' 00:14:48.964 16:34:26 -- nvmf/common.sh@544 -- # jq . 
00:14:48.964 16:34:26 -- nvmf/common.sh@545 -- # IFS=, 00:14:48.964 16:34:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:48.964 "params": { 00:14:48.964 "name": "Nvme1", 00:14:48.964 "trtype": "tcp", 00:14:48.964 "traddr": "10.0.0.2", 00:14:48.964 "adrfam": "ipv4", 00:14:48.964 "trsvcid": "4420", 00:14:48.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.964 "hdgst": false, 00:14:48.964 "ddgst": false 00:14:48.964 }, 00:14:48.964 "method": "bdev_nvme_attach_controller" 00:14:48.964 }' 00:14:48.964 [2024-11-16 16:34:26.252776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:48.964 [2024-11-16 16:34:26.252863] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:48.964 [2024-11-16 16:34:26.253625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:48.964 [2024-11-16 16:34:26.253699] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:48.964 [2024-11-16 16:34:26.271700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:48.964 [2024-11-16 16:34:26.271766] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:48.964 16:34:26 -- target/bdev_io_wait.sh@37 -- # wait 85076 00:14:48.964 [2024-11-16 16:34:26.288757] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:48.964 [2024-11-16 16:34:26.288840] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:49.223 [2024-11-16 16:34:26.469081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.223 [2024-11-16 16:34:26.544307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.223 [2024-11-16 16:34:26.548849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:49.223 [2024-11-16 16:34:26.621094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:49.223 [2024-11-16 16:34:26.640911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.482 [2024-11-16 16:34:26.714335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:49.482 [2024-11-16 16:34:26.714683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.482 Running I/O for 1 seconds... 00:14:49.482 Running I/O for 1 seconds... 00:14:49.482 [2024-11-16 16:34:26.804350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:49.482 Running I/O for 1 seconds... 00:14:49.482 Running I/O for 1 seconds... 
00:14:50.417 00:14:50.417 Latency(us) 00:14:50.417 [2024-11-16T16:34:27.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.417 [2024-11-16T16:34:27.908Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:50.417 Nvme1n1 : 1.00 221087.62 863.62 0.00 0.00 576.82 216.90 1496.90 00:14:50.417 [2024-11-16T16:34:27.908Z] =================================================================================================================== 00:14:50.417 [2024-11-16T16:34:27.908Z] Total : 221087.62 863.62 0.00 0.00 576.82 216.90 1496.90 00:14:50.417 00:14:50.417 Latency(us) 00:14:50.417 [2024-11-16T16:34:27.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.417 [2024-11-16T16:34:27.908Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:50.417 Nvme1n1 : 1.01 9345.79 36.51 0.00 0.00 13626.50 4140.68 17277.67 00:14:50.417 [2024-11-16T16:34:27.908Z] =================================================================================================================== 00:14:50.417 [2024-11-16T16:34:27.908Z] Total : 9345.79 36.51 0.00 0.00 13626.50 4140.68 17277.67 00:14:50.417 00:14:50.417 Latency(us) 00:14:50.417 [2024-11-16T16:34:27.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.417 [2024-11-16T16:34:27.908Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:50.417 Nvme1n1 : 1.01 6650.79 25.98 0.00 0.00 19144.96 9115.46 27167.65 00:14:50.417 [2024-11-16T16:34:27.908Z] =================================================================================================================== 00:14:50.417 [2024-11-16T16:34:27.908Z] Total : 6650.79 25.98 0.00 0.00 19144.96 9115.46 27167.65 00:14:50.676 00:14:50.676 Latency(us) 00:14:50.676 [2024-11-16T16:34:28.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.676 [2024-11-16T16:34:28.167Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:50.676 Nvme1n1 : 1.01 6787.07 26.51 0.00 0.00 18788.99 6911.07 35985.22 00:14:50.676 [2024-11-16T16:34:28.167Z] =================================================================================================================== 00:14:50.676 [2024-11-16T16:34:28.167Z] Total : 6787.07 26.51 0.00 0.00 18788.99 6911.07 35985.22 00:14:50.934 16:34:28 -- target/bdev_io_wait.sh@38 -- # wait 85078 00:14:50.934 16:34:28 -- target/bdev_io_wait.sh@39 -- # wait 85080 00:14:50.934 16:34:28 -- target/bdev_io_wait.sh@40 -- # wait 85082 00:14:50.934 16:34:28 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.934 16:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.934 16:34:28 -- common/autotest_common.sh@10 -- # set +x 00:14:50.934 16:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.934 16:34:28 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:50.934 16:34:28 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:50.934 16:34:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:50.934 16:34:28 -- nvmf/common.sh@116 -- # sync 00:14:50.934 16:34:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:50.934 16:34:28 -- nvmf/common.sh@119 -- # set +e 00:14:50.934 16:34:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:50.934 16:34:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:50.934 rmmod nvme_tcp 00:14:50.934 rmmod nvme_fabrics 00:14:50.934 rmmod nvme_keyring 00:14:50.934 16:34:28 -- 
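[annotation] Each of the four result tables above comes from one bdevperf instance fed the attach-controller blob printed earlier via --json /dev/fd/63. Below is a stand-alone sketch of the write run only; the flags are copied from the log, but the outer subsystems/bdev envelope is an assumption about what gen_nvmf_target_json emits and is not shown verbatim in the log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        -q 128 -o 4096 -w write -t 1 -s 256 --json /dev/stdin <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF

The read, flush, and unmap workers differ only in -m/-i (core mask and shm id) and the -w workload string, which is why the test launches all four against the same cnode1 namespace in parallel.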
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:50.934 16:34:28 -- nvmf/common.sh@123 -- # set -e 00:14:50.934 16:34:28 -- nvmf/common.sh@124 -- # return 0 00:14:50.934 16:34:28 -- nvmf/common.sh@477 -- # '[' -n 85023 ']' 00:14:50.934 16:34:28 -- nvmf/common.sh@478 -- # killprocess 85023 00:14:50.934 16:34:28 -- common/autotest_common.sh@936 -- # '[' -z 85023 ']' 00:14:50.934 16:34:28 -- common/autotest_common.sh@940 -- # kill -0 85023 00:14:50.934 16:34:28 -- common/autotest_common.sh@941 -- # uname 00:14:50.934 16:34:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.934 16:34:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85023 00:14:51.193 16:34:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.193 16:34:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.193 killing process with pid 85023 00:14:51.193 16:34:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85023' 00:14:51.193 16:34:28 -- common/autotest_common.sh@955 -- # kill 85023 00:14:51.193 16:34:28 -- common/autotest_common.sh@960 -- # wait 85023 00:14:51.452 16:34:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:51.452 16:34:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:51.452 16:34:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:51.452 16:34:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.452 16:34:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:51.452 16:34:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.452 16:34:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.452 16:34:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.452 16:34:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:51.452 00:14:51.452 real 0m4.371s 00:14:51.452 user 0m19.133s 00:14:51.452 sys 0m2.076s 00:14:51.452 16:34:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:51.452 16:34:28 -- common/autotest_common.sh@10 -- # set +x 00:14:51.452 ************************************ 00:14:51.452 END TEST nvmf_bdev_io_wait 00:14:51.452 ************************************ 00:14:51.452 16:34:28 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:51.452 16:34:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:51.452 16:34:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.452 16:34:28 -- common/autotest_common.sh@10 -- # set +x 00:14:51.452 ************************************ 00:14:51.452 START TEST nvmf_queue_depth 00:14:51.452 ************************************ 00:14:51.452 16:34:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:51.452 * Looking for test storage... 
00:14:51.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:51.452 16:34:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:51.452 16:34:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:51.452 16:34:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:51.452 16:34:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:51.452 16:34:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:51.452 16:34:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:51.452 16:34:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:51.452 16:34:28 -- scripts/common.sh@335 -- # IFS=.-: 00:14:51.452 16:34:28 -- scripts/common.sh@335 -- # read -ra ver1 00:14:51.452 16:34:28 -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.452 16:34:28 -- scripts/common.sh@336 -- # read -ra ver2 00:14:51.452 16:34:28 -- scripts/common.sh@337 -- # local 'op=<' 00:14:51.452 16:34:28 -- scripts/common.sh@339 -- # ver1_l=2 00:14:51.452 16:34:28 -- scripts/common.sh@340 -- # ver2_l=1 00:14:51.452 16:34:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:51.452 16:34:28 -- scripts/common.sh@343 -- # case "$op" in 00:14:51.452 16:34:28 -- scripts/common.sh@344 -- # : 1 00:14:51.452 16:34:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:51.452 16:34:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.452 16:34:28 -- scripts/common.sh@364 -- # decimal 1 00:14:51.452 16:34:28 -- scripts/common.sh@352 -- # local d=1 00:14:51.452 16:34:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.711 16:34:28 -- scripts/common.sh@354 -- # echo 1 00:14:51.711 16:34:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:51.711 16:34:28 -- scripts/common.sh@365 -- # decimal 2 00:14:51.711 16:34:28 -- scripts/common.sh@352 -- # local d=2 00:14:51.711 16:34:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.711 16:34:28 -- scripts/common.sh@354 -- # echo 2 00:14:51.711 16:34:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:51.711 16:34:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:51.711 16:34:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:51.711 16:34:28 -- scripts/common.sh@367 -- # return 0 00:14:51.711 16:34:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.711 16:34:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:51.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.711 --rc genhtml_branch_coverage=1 00:14:51.711 --rc genhtml_function_coverage=1 00:14:51.711 --rc genhtml_legend=1 00:14:51.711 --rc geninfo_all_blocks=1 00:14:51.711 --rc geninfo_unexecuted_blocks=1 00:14:51.711 00:14:51.711 ' 00:14:51.711 16:34:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:51.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.711 --rc genhtml_branch_coverage=1 00:14:51.711 --rc genhtml_function_coverage=1 00:14:51.711 --rc genhtml_legend=1 00:14:51.711 --rc geninfo_all_blocks=1 00:14:51.711 --rc geninfo_unexecuted_blocks=1 00:14:51.711 00:14:51.711 ' 00:14:51.711 16:34:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:51.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.711 --rc genhtml_branch_coverage=1 00:14:51.711 --rc genhtml_function_coverage=1 00:14:51.711 --rc genhtml_legend=1 00:14:51.711 --rc geninfo_all_blocks=1 00:14:51.711 --rc geninfo_unexecuted_blocks=1 00:14:51.711 00:14:51.711 ' 00:14:51.712 
16:34:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:51.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.712 --rc genhtml_branch_coverage=1 00:14:51.712 --rc genhtml_function_coverage=1 00:14:51.712 --rc genhtml_legend=1 00:14:51.712 --rc geninfo_all_blocks=1 00:14:51.712 --rc geninfo_unexecuted_blocks=1 00:14:51.712 00:14:51.712 ' 00:14:51.712 16:34:28 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.712 16:34:28 -- nvmf/common.sh@7 -- # uname -s 00:14:51.712 16:34:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.712 16:34:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.712 16:34:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.712 16:34:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.712 16:34:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.712 16:34:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.712 16:34:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.712 16:34:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.712 16:34:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.712 16:34:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.712 16:34:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:14:51.712 16:34:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:14:51.712 16:34:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.712 16:34:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.712 16:34:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:51.712 16:34:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.712 16:34:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.712 16:34:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.712 16:34:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.712 16:34:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.712 16:34:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.712 16:34:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.712 16:34:28 -- paths/export.sh@5 -- # export PATH 00:14:51.712 16:34:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.712 16:34:28 -- nvmf/common.sh@46 -- # : 0 00:14:51.712 16:34:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:51.712 16:34:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:51.712 16:34:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:51.712 16:34:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.712 16:34:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.712 16:34:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:51.712 16:34:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:51.712 16:34:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:51.712 16:34:28 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:51.712 16:34:28 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:51.712 16:34:28 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.712 16:34:28 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:51.712 16:34:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:51.712 16:34:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.712 16:34:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:51.712 16:34:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:51.712 16:34:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:51.712 16:34:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.712 16:34:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.712 16:34:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.712 16:34:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:51.712 16:34:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:51.712 16:34:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:51.712 16:34:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:51.712 16:34:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:51.712 16:34:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:51.712 16:34:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.712 16:34:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.712 16:34:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:51.712 16:34:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:51.712 16:34:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.712 16:34:28 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.712 16:34:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.712 16:34:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.712 16:34:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.712 16:34:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.712 16:34:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.712 16:34:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.712 16:34:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:51.712 16:34:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:51.712 Cannot find device "nvmf_tgt_br" 00:14:51.712 16:34:29 -- nvmf/common.sh@154 -- # true 00:14:51.712 16:34:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.712 Cannot find device "nvmf_tgt_br2" 00:14:51.712 16:34:29 -- nvmf/common.sh@155 -- # true 00:14:51.712 16:34:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:51.712 16:34:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:51.712 Cannot find device "nvmf_tgt_br" 00:14:51.712 16:34:29 -- nvmf/common.sh@157 -- # true 00:14:51.712 16:34:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:51.712 Cannot find device "nvmf_tgt_br2" 00:14:51.712 16:34:29 -- nvmf/common.sh@158 -- # true 00:14:51.712 16:34:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:51.712 16:34:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:51.712 16:34:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.712 16:34:29 -- nvmf/common.sh@161 -- # true 00:14:51.712 16:34:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.712 16:34:29 -- nvmf/common.sh@162 -- # true 00:14:51.712 16:34:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.712 16:34:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.712 16:34:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.712 16:34:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.712 16:34:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.712 16:34:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.712 16:34:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.712 16:34:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:51.712 16:34:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:51.712 16:34:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:51.712 16:34:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:51.712 16:34:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:51.712 16:34:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:51.712 16:34:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.712 16:34:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:14:51.971 16:34:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.971 16:34:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:51.971 16:34:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:51.971 16:34:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.971 16:34:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.971 16:34:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.971 16:34:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.971 16:34:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.971 16:34:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:51.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:14:51.971 00:14:51.971 --- 10.0.0.2 ping statistics --- 00:14:51.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.971 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:51.971 16:34:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:51.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:14:51.971 00:14:51.971 --- 10.0.0.3 ping statistics --- 00:14:51.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.971 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:51.971 16:34:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:14:51.971 00:14:51.971 --- 10.0.0.1 ping statistics --- 00:14:51.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.971 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:51.971 16:34:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.971 16:34:29 -- nvmf/common.sh@421 -- # return 0 00:14:51.971 16:34:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:51.971 16:34:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.971 16:34:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:51.971 16:34:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:51.971 16:34:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.971 16:34:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:51.971 16:34:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:51.971 16:34:29 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:51.971 16:34:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:51.971 16:34:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.971 16:34:29 -- common/autotest_common.sh@10 -- # set +x 00:14:51.971 16:34:29 -- nvmf/common.sh@469 -- # nvmfpid=85331 00:14:51.971 16:34:29 -- nvmf/common.sh@470 -- # waitforlisten 85331 00:14:51.971 16:34:29 -- common/autotest_common.sh@829 -- # '[' -z 85331 ']' 00:14:51.971 16:34:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:51.971 16:34:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.971 16:34:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.971 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:51.971 16:34:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.971 16:34:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.971 16:34:29 -- common/autotest_common.sh@10 -- # set +x 00:14:51.971 [2024-11-16 16:34:29.350160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:51.971 [2024-11-16 16:34:29.350223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.230 [2024-11-16 16:34:29.485415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.230 [2024-11-16 16:34:29.540721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:52.230 [2024-11-16 16:34:29.540843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.230 [2024-11-16 16:34:29.540855] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.230 [2024-11-16 16:34:29.540863] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.230 [2024-11-16 16:34:29.540910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.165 16:34:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.165 16:34:30 -- common/autotest_common.sh@862 -- # return 0 00:14:53.165 16:34:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:53.165 16:34:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.165 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 16:34:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.165 16:34:30 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.165 16:34:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.165 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 [2024-11-16 16:34:30.426315] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.165 16:34:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.165 16:34:30 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.165 16:34:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.165 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 Malloc0 00:14:53.165 16:34:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.165 16:34:30 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.165 16:34:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.165 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 16:34:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.165 16:34:30 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.165 16:34:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.165 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 16:34:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.165 16:34:30 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
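Taken together, the rpc_cmd calls above (rpc_cmd is effectively the autotest wrapper around scripts/rpc.py) stand up the whole queue_depth target: a TCP transport, a RAM-backed bdev, a subsystem with that bdev as a namespace, and a listener on 10.0.0.2:4420. As a standalone sketch against an already running nvmf_tgt, assuming the default /var/tmp/spdk.sock RPC socket (the flags themselves are verbatim from the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options exactly as traced above
  rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420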
00:14:53.165 16:34:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.165 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 [2024-11-16 16:34:30.485456] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.165 16:34:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.165 16:34:30 -- target/queue_depth.sh@30 -- # bdevperf_pid=85381 00:14:53.165 16:34:30 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:53.165 16:34:30 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.165 16:34:30 -- target/queue_depth.sh@33 -- # waitforlisten 85381 /var/tmp/bdevperf.sock 00:14:53.165 16:34:30 -- common/autotest_common.sh@829 -- # '[' -z 85381 ']' 00:14:53.165 16:34:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.165 16:34:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.165 16:34:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.165 16:34:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.165 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 [2024-11-16 16:34:30.531446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.165 [2024-11-16 16:34:30.531542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85381 ] 00:14:53.423 [2024-11-16 16:34:30.665687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.423 [2024-11-16 16:34:30.735886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.359 16:34:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.359 16:34:31 -- common/autotest_common.sh@862 -- # return 0 00:14:54.359 16:34:31 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:54.359 16:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.359 16:34:31 -- common/autotest_common.sh@10 -- # set +x 00:14:54.359 NVMe0n1 00:14:54.359 16:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.359 16:34:31 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:54.359 Running I/O for 10 seconds... 
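On the initiator side, bdevperf is started with -z so it idles until told what to run over its own RPC socket; the bdev_nvme_attach_controller call and the perform_tests call are what kick off the 10-second, queue-depth-1024 verify workload whose results follow. A sketch of the same three steps (repo paths are this CI workspace's layout; flags are verbatim from the trace):

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
         -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests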
00:15:04.341 00:15:04.341 Latency(us) 00:15:04.341 [2024-11-16T16:34:41.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.341 [2024-11-16T16:34:41.832Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:04.341 Verification LBA range: start 0x0 length 0x4000 00:15:04.341 NVMe0n1 : 10.05 17221.64 67.27 0.00 0.00 59276.98 10843.23 48377.48 00:15:04.341 [2024-11-16T16:34:41.832Z] =================================================================================================================== 00:15:04.341 [2024-11-16T16:34:41.832Z] Total : 17221.64 67.27 0.00 0.00 59276.98 10843.23 48377.48 00:15:04.341 0 00:15:04.341 16:34:41 -- target/queue_depth.sh@39 -- # killprocess 85381 00:15:04.341 16:34:41 -- common/autotest_common.sh@936 -- # '[' -z 85381 ']' 00:15:04.341 16:34:41 -- common/autotest_common.sh@940 -- # kill -0 85381 00:15:04.341 16:34:41 -- common/autotest_common.sh@941 -- # uname 00:15:04.341 16:34:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.341 16:34:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85381 00:15:04.341 16:34:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.341 16:34:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.341 killing process with pid 85381 00:15:04.341 16:34:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85381' 00:15:04.341 Received shutdown signal, test time was about 10.000000 seconds 00:15:04.341 00:15:04.341 Latency(us) 00:15:04.341 [2024-11-16T16:34:41.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.341 [2024-11-16T16:34:41.832Z] =================================================================================================================== 00:15:04.341 [2024-11-16T16:34:41.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.341 16:34:41 -- common/autotest_common.sh@955 -- # kill 85381 00:15:04.341 16:34:41 -- common/autotest_common.sh@960 -- # wait 85381 00:15:04.600 16:34:42 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:04.600 16:34:42 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:04.600 16:34:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:04.600 16:34:42 -- nvmf/common.sh@116 -- # sync 00:15:04.858 16:34:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:04.858 16:34:42 -- nvmf/common.sh@119 -- # set +e 00:15:04.858 16:34:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:04.858 16:34:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:04.858 rmmod nvme_tcp 00:15:04.858 rmmod nvme_fabrics 00:15:04.858 rmmod nvme_keyring 00:15:04.858 16:34:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:04.858 16:34:42 -- nvmf/common.sh@123 -- # set -e 00:15:04.858 16:34:42 -- nvmf/common.sh@124 -- # return 0 00:15:04.858 16:34:42 -- nvmf/common.sh@477 -- # '[' -n 85331 ']' 00:15:04.858 16:34:42 -- nvmf/common.sh@478 -- # killprocess 85331 00:15:04.858 16:34:42 -- common/autotest_common.sh@936 -- # '[' -z 85331 ']' 00:15:04.858 16:34:42 -- common/autotest_common.sh@940 -- # kill -0 85331 00:15:04.858 16:34:42 -- common/autotest_common.sh@941 -- # uname 00:15:04.858 16:34:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.858 16:34:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85331 00:15:04.858 16:34:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:04.858 16:34:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:15:04.858 killing process with pid 85331 00:15:04.858 16:34:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85331' 00:15:04.858 16:34:42 -- common/autotest_common.sh@955 -- # kill 85331 00:15:04.858 16:34:42 -- common/autotest_common.sh@960 -- # wait 85331 00:15:05.116 16:34:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.116 16:34:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.116 16:34:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.116 16:34:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.116 16:34:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.116 16:34:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.116 16:34:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.116 16:34:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.116 16:34:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:05.116 00:15:05.116 real 0m13.651s 00:15:05.116 user 0m22.800s 00:15:05.116 sys 0m2.595s 00:15:05.116 16:34:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:05.116 16:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:05.116 ************************************ 00:15:05.116 END TEST nvmf_queue_depth 00:15:05.116 ************************************ 00:15:05.116 16:34:42 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.116 16:34:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:05.116 16:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:05.116 16:34:42 -- common/autotest_common.sh@10 -- # set +x 00:15:05.116 ************************************ 00:15:05.116 START TEST nvmf_multipath 00:15:05.116 ************************************ 00:15:05.116 16:34:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.116 * Looking for test storage... 00:15:05.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:05.116 16:34:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:05.116 16:34:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:05.116 16:34:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:05.376 16:34:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:05.376 16:34:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:05.376 16:34:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:05.376 16:34:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:05.376 16:34:42 -- scripts/common.sh@335 -- # IFS=.-: 00:15:05.376 16:34:42 -- scripts/common.sh@335 -- # read -ra ver1 00:15:05.376 16:34:42 -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.376 16:34:42 -- scripts/common.sh@336 -- # read -ra ver2 00:15:05.376 16:34:42 -- scripts/common.sh@337 -- # local 'op=<' 00:15:05.376 16:34:42 -- scripts/common.sh@339 -- # ver1_l=2 00:15:05.376 16:34:42 -- scripts/common.sh@340 -- # ver2_l=1 00:15:05.376 16:34:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:05.376 16:34:42 -- scripts/common.sh@343 -- # case "$op" in 00:15:05.376 16:34:42 -- scripts/common.sh@344 -- # : 1 00:15:05.376 16:34:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:05.376 16:34:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.376 16:34:42 -- scripts/common.sh@364 -- # decimal 1 00:15:05.376 16:34:42 -- scripts/common.sh@352 -- # local d=1 00:15:05.376 16:34:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.376 16:34:42 -- scripts/common.sh@354 -- # echo 1 00:15:05.376 16:34:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:05.376 16:34:42 -- scripts/common.sh@365 -- # decimal 2 00:15:05.376 16:34:42 -- scripts/common.sh@352 -- # local d=2 00:15:05.376 16:34:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.376 16:34:42 -- scripts/common.sh@354 -- # echo 2 00:15:05.376 16:34:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:05.376 16:34:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:05.376 16:34:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:05.376 16:34:42 -- scripts/common.sh@367 -- # return 0 00:15:05.376 16:34:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.376 16:34:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:05.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.376 --rc genhtml_branch_coverage=1 00:15:05.376 --rc genhtml_function_coverage=1 00:15:05.376 --rc genhtml_legend=1 00:15:05.376 --rc geninfo_all_blocks=1 00:15:05.376 --rc geninfo_unexecuted_blocks=1 00:15:05.376 00:15:05.376 ' 00:15:05.376 16:34:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:05.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.376 --rc genhtml_branch_coverage=1 00:15:05.376 --rc genhtml_function_coverage=1 00:15:05.376 --rc genhtml_legend=1 00:15:05.376 --rc geninfo_all_blocks=1 00:15:05.376 --rc geninfo_unexecuted_blocks=1 00:15:05.376 00:15:05.376 ' 00:15:05.376 16:34:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:05.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.376 --rc genhtml_branch_coverage=1 00:15:05.376 --rc genhtml_function_coverage=1 00:15:05.376 --rc genhtml_legend=1 00:15:05.376 --rc geninfo_all_blocks=1 00:15:05.376 --rc geninfo_unexecuted_blocks=1 00:15:05.376 00:15:05.376 ' 00:15:05.376 16:34:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:05.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.376 --rc genhtml_branch_coverage=1 00:15:05.376 --rc genhtml_function_coverage=1 00:15:05.376 --rc genhtml_legend=1 00:15:05.376 --rc geninfo_all_blocks=1 00:15:05.376 --rc geninfo_unexecuted_blocks=1 00:15:05.376 00:15:05.376 ' 00:15:05.376 16:34:42 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:05.376 16:34:42 -- nvmf/common.sh@7 -- # uname -s 00:15:05.376 16:34:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.376 16:34:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.376 16:34:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.376 16:34:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.376 16:34:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.376 16:34:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.376 16:34:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.376 16:34:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.376 16:34:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.376 16:34:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.376 16:34:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:15:05.376 
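A small aside on the host identity used later by nvme connect: nvme gen-hostnqn emits a UUID-based NQN, and the NVME_HOSTID above is simply the UUID portion of that NQN. The trace does not show the parsing itself, so the second line below is an assumed way to do such a derivation, not a quote of common.sh:

  HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007
  HOSTID=${HOSTNQN##*:}         # keep the UUID after the last colon; assumed derivation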
16:34:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:15:05.376 16:34:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.376 16:34:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.376 16:34:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.376 16:34:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.376 16:34:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.376 16:34:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.376 16:34:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.376 16:34:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.376 16:34:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.376 16:34:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.376 16:34:42 -- paths/export.sh@5 -- # export PATH 00:15:05.377 16:34:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.377 16:34:42 -- nvmf/common.sh@46 -- # : 0 00:15:05.377 16:34:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:05.377 16:34:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:05.377 16:34:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:05.377 16:34:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.377 16:34:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.377 16:34:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
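The nvmf_veth_init sequence replayed next (it already ran once for the queue_depth test) builds one initiator address and two target addresses out of three veth pairs: the nvmf_tgt_if/nvmf_tgt_if2 ends move into the nvmf_tgt_ns_spdk namespace, while their peers stay on the host and join a bridge together with the initiator's peer, so 10.0.0.1 can reach both 10.0.0.2 and 10.0.0.3. Condensed from the traced commands into a standalone sketch (teardown, error handling, and the "Cannot find device" probes omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow intra-bridge forwarding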
00:15:05.377 16:34:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:05.377 16:34:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:05.377 16:34:42 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:05.377 16:34:42 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:05.377 16:34:42 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:05.377 16:34:42 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.377 16:34:42 -- target/multipath.sh@43 -- # nvmftestinit 00:15:05.377 16:34:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:05.377 16:34:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.377 16:34:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:05.377 16:34:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:05.377 16:34:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:05.377 16:34:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.377 16:34:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.377 16:34:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.377 16:34:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:05.377 16:34:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:05.377 16:34:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:05.377 16:34:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:05.377 16:34:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:05.377 16:34:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:05.377 16:34:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.377 16:34:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.377 16:34:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:05.377 16:34:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:05.377 16:34:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.377 16:34:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.377 16:34:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.377 16:34:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.377 16:34:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.377 16:34:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.377 16:34:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.377 16:34:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.377 16:34:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:05.377 16:34:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:05.377 Cannot find device "nvmf_tgt_br" 00:15:05.377 16:34:42 -- nvmf/common.sh@154 -- # true 00:15:05.377 16:34:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.377 Cannot find device "nvmf_tgt_br2" 00:15:05.377 16:34:42 -- nvmf/common.sh@155 -- # true 00:15:05.377 16:34:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:05.377 16:34:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:05.377 Cannot find device "nvmf_tgt_br" 00:15:05.377 16:34:42 -- nvmf/common.sh@157 -- # true 00:15:05.377 16:34:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:05.377 Cannot find device "nvmf_tgt_br2" 00:15:05.377 16:34:42 -- nvmf/common.sh@158 -- # true 00:15:05.377 16:34:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:05.377 16:34:42 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:05.377 16:34:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.377 16:34:42 -- nvmf/common.sh@161 -- # true 00:15:05.377 16:34:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.377 16:34:42 -- nvmf/common.sh@162 -- # true 00:15:05.377 16:34:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:05.377 16:34:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:05.377 16:34:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:05.377 16:34:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:05.636 16:34:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:05.636 16:34:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:05.636 16:34:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:05.636 16:34:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:05.636 16:34:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:05.636 16:34:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:05.636 16:34:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:05.636 16:34:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:05.636 16:34:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:05.636 16:34:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:05.636 16:34:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:05.636 16:34:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:05.636 16:34:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:05.636 16:34:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:05.636 16:34:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:05.636 16:34:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:05.636 16:34:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:05.636 16:34:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:05.636 16:34:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:05.636 16:34:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:05.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:15:05.636 00:15:05.637 --- 10.0.0.2 ping statistics --- 00:15:05.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.637 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:15:05.637 16:34:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:05.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:05.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:05.637 00:15:05.637 --- 10.0.0.3 ping statistics --- 00:15:05.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.637 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:05.637 16:34:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:05.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:15:05.637 00:15:05.637 --- 10.0.0.1 ping statistics --- 00:15:05.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.637 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:05.637 16:34:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.637 16:34:43 -- nvmf/common.sh@421 -- # return 0 00:15:05.637 16:34:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:05.637 16:34:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.637 16:34:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:05.637 16:34:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:05.637 16:34:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.637 16:34:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:05.637 16:34:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:05.637 16:34:43 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:05.637 16:34:43 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:05.637 16:34:43 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:05.637 16:34:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:05.637 16:34:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.637 16:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:05.637 16:34:43 -- nvmf/common.sh@469 -- # nvmfpid=85716 00:15:05.637 16:34:43 -- nvmf/common.sh@470 -- # waitforlisten 85716 00:15:05.637 16:34:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.637 16:34:43 -- common/autotest_common.sh@829 -- # '[' -z 85716 ']' 00:15:05.637 16:34:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.637 16:34:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.637 16:34:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.637 16:34:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.637 16:34:43 -- common/autotest_common.sh@10 -- # set +x 00:15:05.637 [2024-11-16 16:34:43.118303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:05.637 [2024-11-16 16:34:43.118388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.895 [2024-11-16 16:34:43.256213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.895 [2024-11-16 16:34:43.327675] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:05.895 [2024-11-16 16:34:43.327830] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:05.895 [2024-11-16 16:34:43.327842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.895 [2024-11-16 16:34:43.327850] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.895 [2024-11-16 16:34:43.328306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.895 [2024-11-16 16:34:43.328615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.895 [2024-11-16 16:34:43.328763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.895 [2024-11-16 16:34:43.328764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.864 16:34:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.864 16:34:44 -- common/autotest_common.sh@862 -- # return 0 00:15:06.864 16:34:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:06.864 16:34:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:06.864 16:34:44 -- common/autotest_common.sh@10 -- # set +x 00:15:06.864 16:34:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.864 16:34:44 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.122 [2024-11-16 16:34:44.377642] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.122 16:34:44 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:07.381 Malloc0 00:15:07.381 16:34:44 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:07.639 16:34:44 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:07.899 16:34:45 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.899 [2024-11-16 16:34:45.369241] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.899 16:34:45 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:08.158 [2024-11-16 16:34:45.577480] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:08.158 16:34:45 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:08.417 16:34:45 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:08.676 16:34:46 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.676 16:34:46 -- common/autotest_common.sh@1187 -- # local i=0 00:15:08.676 16:34:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.676 16:34:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:08.676 16:34:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:10.580 16:34:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
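Because the same subsystem listens on both 10.0.0.2 and 10.0.0.3, the initiator connects once per address and ends up with two controllers (two paths) to a single namespace; waitforserial then polls lsblk until a block device carrying the subsystem serial appears. Reduced to a sketch, with HOSTNQN/HOSTID standing for the values generated earlier in this run (-g/-G request TCP header and data digests):

  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 1; done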
00:15:10.580 16:34:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:10.580 16:34:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.580 16:34:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:10.580 16:34:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.580 16:34:48 -- common/autotest_common.sh@1197 -- # return 0 00:15:10.580 16:34:48 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:10.580 16:34:48 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:10.580 16:34:48 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:10.580 16:34:48 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:10.580 16:34:48 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:10.580 16:34:48 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:10.580 16:34:48 -- target/multipath.sh@38 -- # return 0 00:15:10.580 16:34:48 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:10.580 16:34:48 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:10.580 16:34:48 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:10.580 16:34:48 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:10.580 16:34:48 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:10.580 16:34:48 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:10.580 16:34:48 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:10.580 16:34:48 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:10.580 16:34:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:10.580 16:34:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:10.580 16:34:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:10.580 16:34:48 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:10.580 16:34:48 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:10.580 16:34:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:10.580 16:34:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:10.580 16:34:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:10.580 16:34:48 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:10.580 16:34:48 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:10.580 16:34:48 -- target/multipath.sh@85 -- # echo numa 00:15:10.580 16:34:48 -- target/multipath.sh@88 -- # fio_pid=85854 00:15:10.580 16:34:48 -- target/multipath.sh@90 -- # sleep 1 00:15:10.580 16:34:48 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:10.580 [global] 00:15:10.580 thread=1 00:15:10.580 invalidate=1 00:15:10.580 rw=randrw 00:15:10.580 time_based=1 00:15:10.580 runtime=6 00:15:10.580 ioengine=libaio 00:15:10.580 direct=1 00:15:10.580 bs=4096 00:15:10.580 iodepth=128 00:15:10.580 norandommap=0 00:15:10.580 numjobs=1 00:15:10.580 00:15:10.580 verify_dump=1 00:15:10.580 verify_backlog=512 00:15:10.580 verify_state_save=0 00:15:10.580 do_verify=1 00:15:10.580 verify=crc32c-intel 00:15:10.580 [job0] 00:15:10.580 filename=/dev/nvme0n1 00:15:10.840 Could not set queue depth (nvme0n1) 00:15:10.840 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:10.840 fio-3.35 00:15:10.840 Starting 1 thread 00:15:11.777 16:34:49 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:12.036 16:34:49 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:12.036 16:34:49 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:12.036 16:34:49 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:12.036 16:34:49 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.036 16:34:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:12.036 16:34:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:12.036 16:34:49 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:12.036 16:34:49 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:12.036 16:34:49 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:12.036 16:34:49 -- target/multipath.sh@22 -- # local timeout=20 00:15:12.036 16:34:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:12.036 16:34:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.036 16:34:49 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:12.036 16:34:49 -- target/multipath.sh@25 -- # sleep 1s 00:15:13.415 16:34:50 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:13.415 16:34:50 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:13.415 16:34:50 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:13.415 16:34:50 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:13.415 16:34:50 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:13.673 16:34:51 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:13.673 16:34:51 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:13.673 16:34:51 -- target/multipath.sh@22 -- # local timeout=20 00:15:13.673 16:34:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:13.673 16:34:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:13.673 16:34:51 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:13.673 16:34:51 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:13.673 16:34:51 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:13.673 16:34:51 -- target/multipath.sh@22 -- # local timeout=20 00:15:13.673 16:34:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:13.673 16:34:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:13.673 16:34:51 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:13.673 16:34:51 -- target/multipath.sh@25 -- # sleep 1s 00:15:14.609 16:34:52 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:14.609 16:34:52 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:14.609 16:34:52 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:14.609 16:34:52 -- target/multipath.sh@104 -- # wait 85854 00:15:17.141 00:15:17.141 job0: (groupid=0, jobs=1): err= 0: pid=85880: Sat Nov 16 16:34:54 2024 00:15:17.141 read: IOPS=13.3k, BW=52.1MiB/s (54.7MB/s)(313MiB/6005msec) 00:15:17.141 slat (usec): min=3, max=5359, avg=43.21, stdev=193.45 00:15:17.141 clat (usec): min=2013, max=21252, avg=6616.78, stdev=1076.32 00:15:17.141 lat (usec): min=2036, max=21269, avg=6659.99, stdev=1084.72 00:15:17.141 clat percentiles (usec): 00:15:17.141 | 1.00th=[ 4178], 5.00th=[ 5211], 10.00th=[ 5538], 20.00th=[ 5866], 00:15:17.141 | 30.00th=[ 5997], 40.00th=[ 6194], 50.00th=[ 6521], 60.00th=[ 6783], 00:15:17.141 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7832], 95.00th=[ 8586], 00:15:17.141 | 99.00th=[10028], 99.50th=[10552], 99.90th=[11994], 99.95th=[12649], 00:15:17.141 | 99.99th=[13173] 00:15:17.141 bw ( KiB/s): min=15744, max=34824, per=51.61%, avg=27555.09, stdev=6131.58, samples=11 00:15:17.141 iops : min= 3936, max= 8706, avg=6888.73, stdev=1532.87, samples=11 00:15:17.141 write: IOPS=7681, BW=30.0MiB/s (31.5MB/s)(157MiB/5247msec); 0 zone resets 00:15:17.141 slat (usec): min=8, max=2098, avg=53.67, stdev=128.22 00:15:17.141 clat (usec): min=647, max=12806, avg=5735.70, stdev=887.78 00:15:17.141 lat (usec): min=949, max=12840, avg=5789.38, stdev=890.20 00:15:17.141 clat percentiles (usec): 00:15:17.141 | 1.00th=[ 3294], 5.00th=[ 4178], 10.00th=[ 4883], 20.00th=[ 5211], 00:15:17.141 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5932], 00:15:17.141 | 70.00th=[ 6063], 80.00th=[ 6259], 90.00th=[ 6587], 95.00th=[ 6915], 00:15:17.141 | 99.00th=[ 8455], 99.50th=[ 9372], 99.90th=[11207], 99.95th=[11863], 00:15:17.141 | 99.99th=[12387] 00:15:17.141 bw ( KiB/s): min=16352, max=34256, per=89.57%, avg=27521.82, stdev=5748.24, samples=11 00:15:17.141 iops : min= 4088, max= 8564, avg=6880.45, stdev=1437.06, samples=11 00:15:17.141 lat (usec) : 750=0.01%, 1000=0.01% 00:15:17.141 lat (msec) : 2=0.01%, 4=1.91%, 10=97.28%, 20=0.78%, 50=0.01% 00:15:17.141 cpu : usr=5.96%, sys=24.27%, ctx=7307, majf=0, minf=123 00:15:17.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:17.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.141 issued rwts: total=80147,40303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.141 00:15:17.141 Run status group 0 (all jobs): 00:15:17.141 READ: bw=52.1MiB/s (54.7MB/s), 52.1MiB/s-52.1MiB/s (54.7MB/s-54.7MB/s), io=313MiB (328MB), run=6005-6005msec 00:15:17.141 WRITE: bw=30.0MiB/s (31.5MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=157MiB (165MB), run=5247-5247msec 00:15:17.141 00:15:17.141 Disk stats (read/write): 00:15:17.141 nvme0n1: ios=79149/39435, merge=0/0, ticks=485079/210213, in_queue=695292, util=98.70% 00:15:17.141 16:34:54 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:17.400 16:34:54 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:17.400 16:34:54 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:17.400 16:34:54 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:17.400 16:34:54 -- target/multipath.sh@22 -- # local timeout=20 00:15:17.400 16:34:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:17.400 16:34:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:17.400 16:34:54 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:17.400 16:34:54 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:17.400 16:34:54 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:17.400 16:34:54 -- target/multipath.sh@22 -- # local timeout=20 00:15:17.400 16:34:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:17.400 16:34:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:17.400 16:34:54 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:17.400 16:34:54 -- target/multipath.sh@25 -- # sleep 1s 00:15:18.776 16:34:55 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:18.776 16:34:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.776 16:34:55 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:18.776 16:34:55 -- target/multipath.sh@113 -- # echo round-robin 00:15:18.776 16:34:55 -- target/multipath.sh@116 -- # fio_pid=86007 00:15:18.776 16:34:55 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:18.776 16:34:55 -- target/multipath.sh@118 -- # sleep 1 00:15:18.776 [global] 00:15:18.776 thread=1 00:15:18.776 invalidate=1 00:15:18.776 rw=randrw 00:15:18.776 time_based=1 00:15:18.776 runtime=6 00:15:18.776 ioengine=libaio 00:15:18.776 direct=1 00:15:18.776 bs=4096 00:15:18.776 iodepth=128 00:15:18.776 norandommap=0 00:15:18.776 numjobs=1 00:15:18.776 00:15:18.776 verify_dump=1 00:15:18.776 verify_backlog=512 00:15:18.776 verify_state_save=0 00:15:18.776 do_verify=1 00:15:18.776 verify=crc32c-intel 00:15:18.776 [job0] 00:15:18.776 filename=/dev/nvme0n1 00:15:18.776 Could not set queue depth (nvme0n1) 00:15:18.776 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:18.776 fio-3.35 00:15:18.776 Starting 1 thread 00:15:19.712 16:34:56 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:19.712 16:34:57 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:19.971 16:34:57 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:19.971 16:34:57 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:19.971 16:34:57 -- target/multipath.sh@22 -- # local timeout=20 00:15:19.971 16:34:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:19.971 16:34:57 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:19.971 16:34:57 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:19.971 16:34:57 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:19.971 16:34:57 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:19.971 16:34:57 -- target/multipath.sh@22 -- # local timeout=20 00:15:19.971 16:34:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:19.971 16:34:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.971 16:34:57 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:19.971 16:34:57 -- target/multipath.sh@25 -- # sleep 1s 00:15:20.907 16:34:58 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:20.907 16:34:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:20.907 16:34:58 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:20.907 16:34:58 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:21.166 16:34:58 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:21.733 16:34:58 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:21.733 16:34:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:21.733 16:34:58 -- target/multipath.sh@22 -- # local timeout=20 00:15:21.733 16:34:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:21.733 16:34:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:21.733 16:34:58 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:21.733 16:34:58 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:21.733 16:34:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:21.733 16:34:58 -- target/multipath.sh@22 -- # local timeout=20 00:15:21.733 16:34:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:21.733 16:34:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:21.733 16:34:58 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:21.733 16:34:58 -- target/multipath.sh@25 -- # sleep 1s 00:15:22.669 16:34:59 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:22.669 16:34:59 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:22.669 16:34:59 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:22.669 16:34:59 -- target/multipath.sh@132 -- # wait 86007 00:15:25.204 00:15:25.204 job0: (groupid=0, jobs=1): err= 0: pid=86028: Sat Nov 16 16:35:02 2024 00:15:25.204 read: IOPS=13.9k, BW=54.1MiB/s (56.7MB/s)(325MiB/6001msec) 00:15:25.204 slat (usec): min=4, max=4624, avg=37.25, stdev=174.67 00:15:25.204 clat (usec): min=437, max=16470, avg=6380.66, stdev=1435.93 00:15:25.204 lat (usec): min=449, max=16486, avg=6417.91, stdev=1444.18 00:15:25.204 clat percentiles (usec): 00:15:25.204 | 1.00th=[ 2769], 5.00th=[ 3916], 10.00th=[ 4686], 20.00th=[ 5604], 00:15:25.204 | 30.00th=[ 5866], 40.00th=[ 6063], 50.00th=[ 6259], 60.00th=[ 6587], 00:15:25.204 | 70.00th=[ 6980], 80.00th=[ 7308], 90.00th=[ 7898], 95.00th=[ 8848], 00:15:25.204 | 99.00th=[10814], 99.50th=[11469], 99.90th=[12649], 99.95th=[13304], 00:15:25.204 | 99.99th=[15664] 00:15:25.204 bw ( KiB/s): min=16880, max=36918, per=51.05%, avg=28289.27, stdev=7832.27, samples=11 00:15:25.204 iops : min= 4220, max= 9229, avg=7072.27, stdev=1958.01, samples=11 00:15:25.204 write: IOPS=8389, BW=32.8MiB/s (34.4MB/s)(167MiB/5103msec); 0 zone resets 00:15:25.204 slat (usec): min=10, max=2143, avg=47.02, stdev=116.56 00:15:25.204 clat (usec): min=265, max=12946, avg=5400.34, stdev=1361.72 00:15:25.205 lat (usec): min=298, max=12970, avg=5447.36, stdev=1366.76 00:15:25.205 clat percentiles (usec): 00:15:25.205 | 1.00th=[ 2114], 5.00th=[ 2868], 10.00th=[ 3359], 20.00th=[ 4293], 00:15:25.205 | 30.00th=[ 5080], 40.00th=[ 5407], 50.00th=[ 5604], 60.00th=[ 5800], 00:15:25.205 | 70.00th=[ 5997], 80.00th=[ 6259], 90.00th=[ 6718], 95.00th=[ 7373], 00:15:25.205 | 99.00th=[ 9110], 99.50th=[ 9634], 99.90th=[10814], 99.95th=[11076], 00:15:25.205 | 99.99th=[12518] 00:15:25.205 bw ( KiB/s): min=17040, max=37780, per=84.32%, avg=28296.36, stdev=7433.85, samples=11 00:15:25.205 iops : min= 4260, max= 9445, avg=7074.09, stdev=1858.46, samples=11 00:15:25.205 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:15:25.205 lat (msec) : 2=0.37%, 4=8.88%, 10=89.33%, 20=1.39% 00:15:25.205 cpu : usr=6.65%, sys=23.33%, ctx=8016, majf=0, minf=127 00:15:25.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:25.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.205 issued rwts: total=83136,42812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.205 00:15:25.205 Run status group 0 (all jobs): 00:15:25.205 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=325MiB (341MB), run=6001-6001msec 00:15:25.205 WRITE: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=167MiB (175MB), run=5103-5103msec 00:15:25.205 00:15:25.205 Disk stats (read/write): 00:15:25.205 nvme0n1: ios=82057/41979, merge=0/0, ticks=485550/208906, in_queue=694456, util=98.60% 00:15:25.205 16:35:02 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:25.205 16:35:02 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.205 16:35:02 -- common/autotest_common.sh@1208 -- # local i=0 00:15:25.205 16:35:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:25.205 16:35:02 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.205 16:35:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:25.205 16:35:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.205 16:35:02 -- common/autotest_common.sh@1220 -- # return 0 00:15:25.205 16:35:02 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.205 16:35:02 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:25.205 16:35:02 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:25.205 16:35:02 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:25.205 16:35:02 -- target/multipath.sh@144 -- # nvmftestfini 00:15:25.205 16:35:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:25.205 16:35:02 -- nvmf/common.sh@116 -- # sync 00:15:25.205 16:35:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:25.205 16:35:02 -- nvmf/common.sh@119 -- # set +e 00:15:25.205 16:35:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:25.205 16:35:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:25.205 rmmod nvme_tcp 00:15:25.205 rmmod nvme_fabrics 00:15:25.205 rmmod nvme_keyring 00:15:25.464 16:35:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:25.464 16:35:02 -- nvmf/common.sh@123 -- # set -e 00:15:25.464 16:35:02 -- nvmf/common.sh@124 -- # return 0 00:15:25.464 16:35:02 -- nvmf/common.sh@477 -- # '[' -n 85716 ']' 00:15:25.464 16:35:02 -- nvmf/common.sh@478 -- # killprocess 85716 00:15:25.464 16:35:02 -- common/autotest_common.sh@936 -- # '[' -z 85716 ']' 00:15:25.464 16:35:02 -- common/autotest_common.sh@940 -- # kill -0 85716 00:15:25.464 16:35:02 -- common/autotest_common.sh@941 -- # uname 00:15:25.464 16:35:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:25.464 16:35:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85716 00:15:25.464 killing process with pid 85716 00:15:25.464 16:35:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:25.464 16:35:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:25.464 16:35:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85716' 00:15:25.464 16:35:02 -- common/autotest_common.sh@955 -- # kill 85716 00:15:25.464 16:35:02 -- common/autotest_common.sh@960 -- # wait 85716 00:15:25.722 16:35:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.722 16:35:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.722 16:35:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.722 16:35:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.722 16:35:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.722 16:35:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.722 16:35:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.722 16:35:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.722 16:35:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:25.723 00:15:25.723 real 0m20.559s 00:15:25.723 user 1m20.000s 00:15:25.723 sys 0m6.447s 00:15:25.723 16:35:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:25.723 16:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:25.723 ************************************ 00:15:25.723 END TEST nvmf_multipath 00:15:25.723 ************************************ 00:15:25.723 16:35:03 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:25.723 16:35:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:25.723 16:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.723 16:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:25.723 ************************************ 00:15:25.723 START TEST nvmf_zcopy 00:15:25.723 ************************************ 00:15:25.723 16:35:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:25.723 * Looking for test storage... 00:15:25.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:25.723 16:35:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:25.723 16:35:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:25.723 16:35:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:25.982 16:35:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:25.982 16:35:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:25.982 16:35:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:25.982 16:35:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:25.982 16:35:03 -- scripts/common.sh@335 -- # IFS=.-: 00:15:25.982 16:35:03 -- scripts/common.sh@335 -- # read -ra ver1 00:15:25.982 16:35:03 -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.982 16:35:03 -- scripts/common.sh@336 -- # read -ra ver2 00:15:25.982 16:35:03 -- scripts/common.sh@337 -- # local 'op=<' 00:15:25.982 16:35:03 -- scripts/common.sh@339 -- # ver1_l=2 00:15:25.982 16:35:03 -- scripts/common.sh@340 -- # ver2_l=1 00:15:25.982 16:35:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:25.982 16:35:03 -- scripts/common.sh@343 -- # case "$op" in 00:15:25.982 16:35:03 -- scripts/common.sh@344 -- # : 1 00:15:25.982 16:35:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:25.982 16:35:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.982 16:35:03 -- scripts/common.sh@364 -- # decimal 1 00:15:25.982 16:35:03 -- scripts/common.sh@352 -- # local d=1 00:15:25.982 16:35:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.982 16:35:03 -- scripts/common.sh@354 -- # echo 1 00:15:25.982 16:35:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:25.982 16:35:03 -- scripts/common.sh@365 -- # decimal 2 00:15:25.982 16:35:03 -- scripts/common.sh@352 -- # local d=2 00:15:25.982 16:35:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.982 16:35:03 -- scripts/common.sh@354 -- # echo 2 00:15:25.982 16:35:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:25.982 16:35:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:25.982 16:35:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:25.982 16:35:03 -- scripts/common.sh@367 -- # return 0 00:15:25.982 16:35:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.982 16:35:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:25.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.982 --rc genhtml_branch_coverage=1 00:15:25.982 --rc genhtml_function_coverage=1 00:15:25.982 --rc genhtml_legend=1 00:15:25.982 --rc geninfo_all_blocks=1 00:15:25.982 --rc geninfo_unexecuted_blocks=1 00:15:25.982 00:15:25.982 ' 00:15:25.982 16:35:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:25.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.982 --rc genhtml_branch_coverage=1 00:15:25.982 --rc genhtml_function_coverage=1 00:15:25.982 --rc genhtml_legend=1 00:15:25.982 --rc geninfo_all_blocks=1 00:15:25.982 --rc geninfo_unexecuted_blocks=1 00:15:25.982 00:15:25.982 ' 00:15:25.982 16:35:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:25.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.982 --rc genhtml_branch_coverage=1 00:15:25.982 --rc genhtml_function_coverage=1 00:15:25.982 --rc genhtml_legend=1 00:15:25.982 --rc geninfo_all_blocks=1 00:15:25.982 --rc geninfo_unexecuted_blocks=1 00:15:25.982 00:15:25.982 ' 00:15:25.982 16:35:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:25.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.982 --rc genhtml_branch_coverage=1 00:15:25.982 --rc genhtml_function_coverage=1 00:15:25.982 --rc genhtml_legend=1 00:15:25.982 --rc geninfo_all_blocks=1 00:15:25.982 --rc geninfo_unexecuted_blocks=1 00:15:25.982 00:15:25.982 ' 00:15:25.982 16:35:03 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.982 16:35:03 -- nvmf/common.sh@7 -- # uname -s 00:15:25.982 16:35:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.982 16:35:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.982 16:35:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.982 16:35:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.982 16:35:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.982 16:35:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.982 16:35:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.982 16:35:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.982 16:35:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.982 16:35:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.982 16:35:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:15:25.982 
16:35:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:15:25.982 16:35:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.983 16:35:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.983 16:35:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.983 16:35:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.983 16:35:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.983 16:35:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.983 16:35:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.983 16:35:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.983 16:35:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.983 16:35:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.983 16:35:03 -- paths/export.sh@5 -- # export PATH 00:15:25.983 16:35:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.983 16:35:03 -- nvmf/common.sh@46 -- # : 0 00:15:25.983 16:35:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:25.983 16:35:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:25.983 16:35:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:25.983 16:35:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.983 16:35:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.983 16:35:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
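The nvmftestinit/nvmf_veth_init sequence that follows builds the test network: one initiator-side veth in the root namespace, two target-side veths moved into a dedicated netns, and a bridge joining the peer ends. A condensed sketch of that topology, mirroring the ip/iptables commands traced below (interface and namespace names as used by nvmf/common.sh; link-up commands omitted for brevity):

  # namespace that will host the SPDK target
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listen addresses
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bridge the peer ends together and open NVMe/TCP traffic to the initiator side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" errors at the start of this sequence are expected: teardown of any topology left by a previous run is attempted unconditionally before the links are recreated, and the three pings afterwards verify the initiator and both target addresses before the target is started.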
00:15:25.983 16:35:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:25.983 16:35:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:25.983 16:35:03 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:25.983 16:35:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:25.983 16:35:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.983 16:35:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:25.983 16:35:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:25.983 16:35:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:25.983 16:35:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.983 16:35:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.983 16:35:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.983 16:35:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:25.983 16:35:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:25.983 16:35:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:25.983 16:35:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:25.983 16:35:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:25.983 16:35:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:25.983 16:35:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.983 16:35:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.983 16:35:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.983 16:35:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:25.983 16:35:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.983 16:35:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.983 16:35:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.983 16:35:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.983 16:35:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.983 16:35:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.983 16:35:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.983 16:35:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.983 16:35:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:25.983 16:35:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:25.983 Cannot find device "nvmf_tgt_br" 00:15:25.983 16:35:03 -- nvmf/common.sh@154 -- # true 00:15:25.983 16:35:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.983 Cannot find device "nvmf_tgt_br2" 00:15:25.983 16:35:03 -- nvmf/common.sh@155 -- # true 00:15:25.983 16:35:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:25.983 16:35:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:25.983 Cannot find device "nvmf_tgt_br" 00:15:25.983 16:35:03 -- nvmf/common.sh@157 -- # true 00:15:25.983 16:35:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:25.983 Cannot find device "nvmf_tgt_br2" 00:15:25.983 16:35:03 -- nvmf/common.sh@158 -- # true 00:15:25.983 16:35:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:25.983 16:35:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:25.983 16:35:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.983 16:35:03 -- nvmf/common.sh@161 -- # true 00:15:25.983 16:35:03 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.983 16:35:03 -- nvmf/common.sh@162 -- # true 00:15:25.983 16:35:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.983 16:35:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.983 16:35:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.983 16:35:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.983 16:35:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:26.242 16:35:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:26.242 16:35:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:26.242 16:35:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:26.242 16:35:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:26.242 16:35:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:26.242 16:35:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:26.242 16:35:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:26.242 16:35:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:26.242 16:35:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:26.242 16:35:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:26.242 16:35:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:26.242 16:35:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:26.242 16:35:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:26.242 16:35:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:26.242 16:35:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:26.242 16:35:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:26.242 16:35:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:26.242 16:35:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:26.242 16:35:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:26.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:15:26.242 00:15:26.242 --- 10.0.0.2 ping statistics --- 00:15:26.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.242 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:15:26.242 16:35:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:26.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:26.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:15:26.242 00:15:26.242 --- 10.0.0.3 ping statistics --- 00:15:26.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.242 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:26.242 16:35:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:26.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:26.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:26.242 00:15:26.242 --- 10.0.0.1 ping statistics --- 00:15:26.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.242 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:26.242 16:35:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.242 16:35:03 -- nvmf/common.sh@421 -- # return 0 00:15:26.242 16:35:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:26.242 16:35:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.242 16:35:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:26.242 16:35:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:26.242 16:35:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.242 16:35:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:26.242 16:35:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:26.242 16:35:03 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:26.242 16:35:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:26.242 16:35:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.242 16:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:26.242 16:35:03 -- nvmf/common.sh@469 -- # nvmfpid=86321 00:15:26.242 16:35:03 -- nvmf/common.sh@470 -- # waitforlisten 86321 00:15:26.242 16:35:03 -- common/autotest_common.sh@829 -- # '[' -z 86321 ']' 00:15:26.242 16:35:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.242 16:35:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:26.242 16:35:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.242 16:35:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.242 16:35:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.242 16:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:26.242 [2024-11-16 16:35:03.702570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:26.242 [2024-11-16 16:35:03.702657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.501 [2024-11-16 16:35:03.842349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.501 [2024-11-16 16:35:03.898951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:26.501 [2024-11-16 16:35:03.899127] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.501 [2024-11-16 16:35:03.899140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.501 [2024-11-16 16:35:03.899148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
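Once the reactor is up and the RPC socket at /var/tmp/spdk.sock is listening, zcopy.sh provisions the target with a short RPC sequence. A condensed sketch, mirroring the rpc_cmd calls traced below (paths abbreviated; the target itself was launched inside the netns as nvmf_tgt -i 0 -e 0xFFFF -m 0x2, hence the 0xFFFF tracepoint mask notice above and the reactor on core 1):

  # TCP transport with zero-copy enabled; -c 0 sets the in-capsule data size to zero
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem: allow any host (-a), serial number (-s), at most 10 namespaces (-m)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # data and discovery listeners on the veth address configured earlier
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to 10.0.0.2:4420 using the JSON emitted by gen_nvmf_target_json (the config template and its resolved form are both visible in the trace below) and runs a 10-second verify workload at queue depth 128 with 8192-byte I/O.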
00:15:26.501 [2024-11-16 16:35:03.899180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.437 16:35:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.437 16:35:04 -- common/autotest_common.sh@862 -- # return 0 00:15:27.437 16:35:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.437 16:35:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.437 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:27.437 16:35:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.437 16:35:04 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:27.437 16:35:04 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:27.437 16:35:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.437 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:27.437 [2024-11-16 16:35:04.784677] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.437 16:35:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.437 16:35:04 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:27.437 16:35:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.437 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:27.437 16:35:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.437 16:35:04 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.437 16:35:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.437 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:27.437 [2024-11-16 16:35:04.800784] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.437 16:35:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.437 16:35:04 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.437 16:35:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.437 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:27.437 16:35:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.437 16:35:04 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:27.437 16:35:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.437 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:27.437 malloc0 00:15:27.437 16:35:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.437 16:35:04 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:27.437 16:35:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.437 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:15:27.437 16:35:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.437 16:35:04 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:27.437 16:35:04 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:27.437 16:35:04 -- nvmf/common.sh@520 -- # config=() 00:15:27.437 16:35:04 -- nvmf/common.sh@520 -- # local subsystem config 00:15:27.437 16:35:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:27.437 16:35:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:27.437 { 00:15:27.437 "params": { 00:15:27.437 "name": "Nvme$subsystem", 00:15:27.437 "trtype": "$TEST_TRANSPORT", 
00:15:27.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:27.437 "adrfam": "ipv4", 00:15:27.437 "trsvcid": "$NVMF_PORT", 00:15:27.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:27.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:27.437 "hdgst": ${hdgst:-false}, 00:15:27.437 "ddgst": ${ddgst:-false} 00:15:27.437 }, 00:15:27.437 "method": "bdev_nvme_attach_controller" 00:15:27.437 } 00:15:27.437 EOF 00:15:27.437 )") 00:15:27.437 16:35:04 -- nvmf/common.sh@542 -- # cat 00:15:27.437 16:35:04 -- nvmf/common.sh@544 -- # jq . 00:15:27.437 16:35:04 -- nvmf/common.sh@545 -- # IFS=, 00:15:27.437 16:35:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:27.437 "params": { 00:15:27.437 "name": "Nvme1", 00:15:27.437 "trtype": "tcp", 00:15:27.437 "traddr": "10.0.0.2", 00:15:27.437 "adrfam": "ipv4", 00:15:27.437 "trsvcid": "4420", 00:15:27.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.437 "hdgst": false, 00:15:27.437 "ddgst": false 00:15:27.437 }, 00:15:27.437 "method": "bdev_nvme_attach_controller" 00:15:27.437 }' 00:15:27.437 [2024-11-16 16:35:04.894430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:27.437 [2024-11-16 16:35:04.894546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86372 ] 00:15:27.696 [2024-11-16 16:35:05.037359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.696 [2024-11-16 16:35:05.110976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.955 Running I/O for 10 seconds... 00:15:37.933 00:15:37.933 Latency(us) 00:15:37.933 [2024-11-16T16:35:15.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.933 [2024-11-16T16:35:15.424Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:37.933 Verification LBA range: start 0x0 length 0x1000 00:15:37.933 Nvme1n1 : 10.01 11394.78 89.02 0.00 0.00 11206.61 793.13 18826.71 00:15:37.933 [2024-11-16T16:35:15.424Z] =================================================================================================================== 00:15:37.933 [2024-11-16T16:35:15.424Z] Total : 11394.78 89.02 0.00 0.00 11206.61 793.13 18826.71 00:15:38.192 16:35:15 -- target/zcopy.sh@39 -- # perfpid=86484 00:15:38.192 16:35:15 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:38.192 16:35:15 -- common/autotest_common.sh@10 -- # set +x 00:15:38.192 16:35:15 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:38.192 16:35:15 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:38.192 16:35:15 -- nvmf/common.sh@520 -- # config=() 00:15:38.192 16:35:15 -- nvmf/common.sh@520 -- # local subsystem config 00:15:38.192 16:35:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:38.192 16:35:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:38.192 { 00:15:38.192 "params": { 00:15:38.192 "name": "Nvme$subsystem", 00:15:38.192 "trtype": "$TEST_TRANSPORT", 00:15:38.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.192 "adrfam": "ipv4", 00:15:38.192 "trsvcid": "$NVMF_PORT", 00:15:38.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.192 "hdgst": ${hdgst:-false}, 00:15:38.192 "ddgst": ${ddgst:-false} 
00:15:38.192 }, 00:15:38.192 "method": "bdev_nvme_attach_controller" 00:15:38.192 } 00:15:38.192 EOF 00:15:38.192 )") 00:15:38.192 16:35:15 -- nvmf/common.sh@542 -- # cat 00:15:38.192 16:35:15 -- nvmf/common.sh@544 -- # jq . 00:15:38.192 [2024-11-16 16:35:15.573155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.192 [2024-11-16 16:35:15.573196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.192 16:35:15 -- nvmf/common.sh@545 -- # IFS=, 00:15:38.192 16:35:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:38.192 "params": { 00:15:38.192 "name": "Nvme1", 00:15:38.192 "trtype": "tcp", 00:15:38.192 "traddr": "10.0.0.2", 00:15:38.192 "adrfam": "ipv4", 00:15:38.192 "trsvcid": "4420", 00:15:38.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.192 "hdgst": false, 00:15:38.192 "ddgst": false 00:15:38.192 }, 00:15:38.192 "method": "bdev_nvme_attach_controller" 00:15:38.192 }' 00:15:38.192 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.192 [2024-11-16 16:35:15.585124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.192 [2024-11-16 16:35:15.585168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.192 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.192 [2024-11-16 16:35:15.597124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.192 [2024-11-16 16:35:15.597166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.193 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.193 [2024-11-16 16:35:15.609122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.193 [2024-11-16 16:35:15.609173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.193 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.193 [2024-11-16 16:35:15.621131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.193 [2024-11-16 16:35:15.621154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.193 [2024-11-16 16:35:15.622697] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:38.193 [2024-11-16 16:35:15.622778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86484 ] 00:15:38.193 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.193 [2024-11-16 16:35:15.633134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.193 [2024-11-16 16:35:15.633168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.193 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.193 [2024-11-16 16:35:15.645140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.193 [2024-11-16 16:35:15.645166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.193 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.193 [2024-11-16 16:35:15.657144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.193 [2024-11-16 16:35:15.657170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.193 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.193 [2024-11-16 16:35:15.669137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.193 [2024-11-16 16:35:15.669161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.193 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.193 [2024-11-16 16:35:15.681158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.193 [2024-11-16 16:35:15.681185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.452 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.452 [2024-11-16 16:35:15.693151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.452 [2024-11-16 16:35:15.693178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.452 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.452 [2024-11-16 16:35:15.705153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.452 [2024-11-16 16:35:15.705179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.452 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.452 [2024-11-16 16:35:15.717146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.452 [2024-11-16 16:35:15.717170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.452 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.452 [2024-11-16 16:35:15.729158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.452 [2024-11-16 16:35:15.729186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.452 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.452 [2024-11-16 16:35:15.741154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.452 [2024-11-16 16:35:15.741181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.452 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.452 [2024-11-16 16:35:15.753163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.452 [2024-11-16 16:35:15.753185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.452 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.452 [2024-11-16 16:35:15.762649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.452 [2024-11-16 16:35:15.765163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.452 [2024-11-16 16:35:15.765185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.452 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.452 [2024-11-16 16:35:15.777165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.452 [2024-11-16 16:35:15.777188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.452 2024/11/16 16:35:15 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.452 [2024-11-16 16:35:15.789173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.452 [2024-11-16 16:35:15.789198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.801179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.801218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.813182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.813203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.823004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.453 [2024-11-16 16:35:15.825184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.825216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.837194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.837216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.849189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.849210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.861192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.861212] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.873216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.873242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.885200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.885221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.897203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.897224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.909206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.909227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.921214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.921242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.453 [2024-11-16 16:35:15.933212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.453 [2024-11-16 16:35:15.933232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.453 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:15.945223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 
16:35:15.945262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:15.957261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:15.957290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:15.965236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:15.965292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:15.977246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:15.977288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:15.989246] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:15.989288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:16.001248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.001291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:16.013274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.013303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 Running I/O for 5 seconds... 
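The burst of -32602 errors surrounding the 5-second randrw run is the expected negative path: NSID 1 already exists, so every nvmf_subsystem_add_ns call fails, and each attempt drives the subsystem through its internal pause/resume path (the nvmf_rpc_ns_paused frames) while zero-copy I/O is in flight. One iteration reduces to the following, assuming the target provisioned above is still running:

  # re-adding the namespace that is already exported must fail
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # request:  {"method": "nvmf_subsystem_add_ns",
  #            "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
  #                       "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
  # response: Code=-32602 (Invalid parameters), "Requested NSID 1 already in use"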
00:15:38.713 [2024-11-16 16:35:16.025253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.025294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:16.040949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.040995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:16.056666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.056696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:16.073011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.073041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:16.089163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.089193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:16.105870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.105899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:16.121415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.121445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:38.713 [2024-11-16 16:35:16.137997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.713 [2024-11-16 16:35:16.138026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.713 2024/11/16 16:35:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[log condensed: the same three-line failure (subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", nvmf_rpc.c:1513:nvmf_rpc_ns_paused: "Unable to add namespace", then the JSON-RPC error Code=-32602 Msg=Invalid parameters for method nvmf_subsystem_add_ns with params map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1]) repeats with only its timestamps advancing, roughly 11-18 ms apart, from 16:35:16.154 through 16:35:17.393 (log time 00:15:38.713 to 00:15:40.014)]
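To make the repeated record legible: every occurrence is the same JSON-RPC request shape being rejected. Below is a minimal, illustrative Python sketch of that request sent raw over a Unix socket; the client code and the /var/tmp/spdk.sock path are assumptions (SPDK's conventional default RPC socket), not something captured from this run. Re-adding NSID 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is still registered is rejected exactly as logged, with Code=-32602 Msg=Invalid parameters.

import json
import socket

# Request shape reconstructed from the log's params map:
# map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1]
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},  # NSID 1 is already taken
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")  # assumed default SPDK RPC socket path
    sock.sendall(json.dumps(request).encode())
    # A single recv() suffices for one small reply in this sketch.
    reply = json.loads(sock.recv(65536).decode())
    # Expected, per the log: an error object with code -32602
    # and message "Invalid parameters".
    print(reply)

The test loop in this section issues that call repeatedly, which is why the same rejection appears back-to-back for about two seconds of wall-clock time.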
[log condensed: the identical three-line failure continues, timestamps only advancing, from 16:35:17.409 through 16:35:18.349 (log time 00:15:40.014 to 00:15:41.091)] 00:15:41.091 [2024-11-16 16:35:18.363643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.363671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method,
err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.375407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.375436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.391390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.391421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.407312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.407340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.418836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.418865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.433803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.433831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.449663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.449691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.465409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.465437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.481241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.481286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.495505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.495534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.091 [2024-11-16 16:35:18.506771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.091 [2024-11-16 16:35:18.506800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.091 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.092 [2024-11-16 16:35:18.522028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.092 [2024-11-16 16:35:18.522067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.092 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.092 [2024-11-16 16:35:18.537588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.092 [2024-11-16 16:35:18.537619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.092 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.092 [2024-11-16 16:35:18.552405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.092 [2024-11-16 16:35:18.552433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.092 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.092 [2024-11-16 16:35:18.569659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.092 [2024-11-16 16:35:18.569691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.584321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.584365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.599634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.599663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.615737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.615767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.631851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.631880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.643885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.643913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.659235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.659264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.675513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.675541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.691083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.691111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.705644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.705672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.716640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.716816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.732973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.733152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.749038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.749204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.765129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.363 [2024-11-16 16:35:18.765303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.363 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.363 [2024-11-16 16:35:18.781711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.364 [2024-11-16 16:35:18.781849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.364 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.364 [2024-11-16 16:35:18.798868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.364 [2024-11-16 16:35:18.799008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.364 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.364 [2024-11-16 16:35:18.814887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.364 [2024-11-16 16:35:18.814917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.364 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.364 [2024-11-16 16:35:18.830962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.364 [2024-11-16 16:35:18.830993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.364 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.364 [2024-11-16 16:35:18.844433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.364 [2024-11-16 16:35:18.844621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.364 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.860670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.860699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.876615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.876645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.889028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.889072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.900648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.900677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.915553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.915694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.932020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.932193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.948497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.948639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.965093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.965283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.981418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.981556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:18.992459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:18.992596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:18 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:19.008683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:19.008824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:19.024989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:19.025197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:19.041760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.623 [2024-11-16 16:35:19.041899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.623 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.623 [2024-11-16 16:35:19.057643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.624 [2024-11-16 16:35:19.057674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.624 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.624 [2024-11-16 16:35:19.074834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.624 [2024-11-16 16:35:19.074876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.624 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.624 [2024-11-16 16:35:19.091054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.624 [2024-11-16 16:35:19.091141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.624 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.624 [2024-11-16 16:35:19.108135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.624 [2024-11-16 16:35:19.108188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.624 2024/11/16 
16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.123387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.883 [2024-11-16 16:35:19.123416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.883 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.139377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.883 [2024-11-16 16:35:19.139418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.883 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.155845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.883 [2024-11-16 16:35:19.155887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.883 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.172187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.883 [2024-11-16 16:35:19.172227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.883 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.189167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.883 [2024-11-16 16:35:19.189196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.883 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.205801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.883 [2024-11-16 16:35:19.205842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.883 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.222578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.883 [2024-11-16 16:35:19.222604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:41.883 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.237857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.883 [2024-11-16 16:35:19.237882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.883 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.252219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.883 [2024-11-16 16:35:19.252244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.883 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.883 [2024-11-16 16:35:19.262806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.884 [2024-11-16 16:35:19.262832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.884 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.884 [2024-11-16 16:35:19.278439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.884 [2024-11-16 16:35:19.278465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.884 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.884 [2024-11-16 16:35:19.294228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.884 [2024-11-16 16:35:19.294270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.884 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.884 [2024-11-16 16:35:19.305903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.884 [2024-11-16 16:35:19.305929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.884 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.884 [2024-11-16 16:35:19.321114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.884 [2024-11-16 16:35:19.321140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:41.884 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.884 [2024-11-16 16:35:19.336975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.884 [2024-11-16 16:35:19.337001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.884 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.884 [2024-11-16 16:35:19.352996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.884 [2024-11-16 16:35:19.353022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.884 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:41.884 [2024-11-16 16:35:19.365132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:41.884 [2024-11-16 16:35:19.365160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:41.884 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.378864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.378890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.394540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.394565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.410886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.410911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.427114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.427139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.444088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.444114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.459952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.459978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.471445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.471502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.487639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.487664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.503025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.503050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.517409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.517434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.531999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.532024] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.548728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.548753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.564541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.564566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.576158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.576182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.592555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.592581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.608173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.608198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.143 [2024-11-16 16:35:19.620481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.143 [2024-11-16 16:35:19.620507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.143 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.403 [2024-11-16 16:35:19.636033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.403 [2024-11-16 
16:35:19.636101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.403 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.403 [2024-11-16 16:35:19.652223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.403 [2024-11-16 16:35:19.652248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.403 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.403 [2024-11-16 16:35:19.668032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.403 [2024-11-16 16:35:19.668069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.403 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.403 [2024-11-16 16:35:19.682980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.403 [2024-11-16 16:35:19.683005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.403 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.403 [2024-11-16 16:35:19.699103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.403 [2024-11-16 16:35:19.699128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.404 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.404 [2024-11-16 16:35:19.710629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.404 [2024-11-16 16:35:19.710654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.404 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.404 [2024-11-16 16:35:19.726584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:42.404 [2024-11-16 16:35:19.726610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:42.404 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:42.404 [2024-11-16 16:35:19.742244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:42.404 [2024-11-16 16:35:19.742273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:42.404 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:42.404 [2024-11-16 16:35:19.753835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:42.404 [2024-11-16 16:35:19.753982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:42.404 2024/11/16 16:35:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line failure (subsystem.c "Requested NSID 1 already in use", nvmf_rpc.c "Unable to add namespace", JSON-RPC Code=-32602 Msg=Invalid parameters) repeats with only the timestamps advancing, from 16:35:19.769 through 16:35:21.027 ...]
00:15:43.705
00:15:43.705 Latency(us)
00:15:43.705 [2024-11-16T16:35:21.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:43.705 [2024-11-16T16:35:21.196Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:43.705 Nvme1n1 : 5.01 14381.51 112.36 0.00 0.00 8889.89 3753.43 17396.83
00:15:43.705 [2024-11-16T16:35:21.196Z] ===================================================================================================================
00:15:43.705 [2024-11-16T16:35:21.196Z] Total : 14381.51 112.36 0.00 0.00 8889.89 3753.43 17396.83
[... the same three-line failure resumes after the summary and continues, timestamps advancing from 16:35:21.036 through 16:35:21.277, until the loop is torn down below ...]
00:15:43.966 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86484) - No such process
00:15:43.966 16:35:21 -- target/zcopy.sh@49 -- # wait 86484
00:15:43.966 16:35:21 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:43.966 16:35:21 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:43.966 16:35:21 -- common/autotest_common.sh@10 -- # set +x
00:15:43.966 16:35:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:43.966 16:35:21 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:43.966 16:35:21 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:43.966 16:35:21 -- common/autotest_common.sh@10 -- # set +x
00:15:43.966 delay0
00:15:43.966 16:35:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:43.966 16:35:21 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:15:43.966 16:35:21 -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:43.966 16:35:21 -- common/autotest_common.sh@10 -- # set +x
00:15:43.966 16:35:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
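After killing the add_ns loop, zcopy.sh rebuilds the namespace on top of a delay bdev, presumably so the upcoming abort example has long-lived commands in flight to abort: NSID 1 is removed, delay0 is stacked on malloc0 with 1000000 us (one second) injected latencies, and delay0 is re-exported as NSID 1. The same sequence issued by hand might look like the sketch below; the flags and values are copied from the rpc_cmd calls above, and -r/-t/-w/-n are taken to be the average/p99 read and write latencies in microseconds, per the delay bdev's usual flag meanings:

    # Sketch of the delay-bdev swap shown in the trace (values mirror it):
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s per I/O
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # I/O against NSID 1 now takes about a second, leaving plenty of
    # in-flight commands for the abort example to target.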
00:15:43.966 16:35:21 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:15:44.225 [2024-11-16 16:35:21.474095] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:15:50.795 Initializing NVMe Controllers
00:15:50.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:50.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:50.795 Initialization complete. Launching workers.
00:15:50.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 81
00:15:50.795 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 368, failed to submit 33
00:15:50.795 success 182, unsuccess 186, failed 0
00:15:50.795 16:35:27 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:15:50.795 16:35:27 -- target/zcopy.sh@60 -- # nvmftestfini
00:15:50.795 16:35:27 -- nvmf/common.sh@476 -- # nvmfcleanup
00:15:50.795 16:35:27 -- nvmf/common.sh@116 -- # sync
00:15:50.795 16:35:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:15:50.795 16:35:27 -- nvmf/common.sh@119 -- # set +e
00:15:50.795 16:35:27 -- nvmf/common.sh@120 -- # for i in {1..20}
00:15:50.795 16:35:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:15:50.795 rmmod nvme_tcp
00:15:50.795 rmmod nvme_fabrics
00:15:50.795 rmmod nvme_keyring
00:15:50.795 16:35:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:15:50.795 16:35:27 -- nvmf/common.sh@123 -- # set -e
00:15:50.795 16:35:27 -- nvmf/common.sh@124 -- # return 0
00:15:50.795 16:35:27 -- nvmf/common.sh@477 -- # '[' -n 86321 ']'
00:15:50.795 16:35:27 -- nvmf/common.sh@478 -- # killprocess 86321
00:15:50.795 16:35:27 -- common/autotest_common.sh@936 -- # '[' -z 86321 ']'
00:15:50.795 16:35:27 -- common/autotest_common.sh@940 -- # kill -0 86321
00:15:50.795 16:35:27 -- common/autotest_common.sh@941 -- # uname
00:15:50.795 16:35:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:50.795 16:35:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86321
00:15:50.795 16:35:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:15:50.795 16:35:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:15:50.795 killing process with pid 86321
00:15:50.795 16:35:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86321'
00:15:50.795 16:35:27 -- common/autotest_common.sh@955 -- # kill 86321
00:15:50.795 16:35:27 -- common/autotest_common.sh@960 -- # wait 86321
00:15:50.795 16:35:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:15:50.795 16:35:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:15:50.795 16:35:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:15:50.795 16:35:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:50.795 16:35:27 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:15:50.795 16:35:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:50.795 16:35:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:50.795 16:35:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:50.795 16:35:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:15:50.795
00:15:50.795 real 0m24.793s
00:15:50.795 user 0m39.290s
00:15:50.795 sys 0m7.248s
00:15:50.795 16:35:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:50.795 16:35:27 -- common/autotest_common.sh@10 -- # set +x
00:15:50.795 ************************************
00:15:50.795 END TEST nvmf_zcopy
00:15:50.795 ************************************
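Before the next test starts, the Nvme1n1 summary above is worth a sanity check: at the 8192-byte I/O size the MiB/s column follows directly from the IOPS column. A quick back-of-envelope verification, outside the suite (the awk one-liner is purely illustrative):

    # 14381.51 IOPS at 8192 bytes each, converted to MiB/s:
    awk 'BEGIN { printf "%.2f MiB/s\n", 14381.51 * 8192 / (1024 * 1024) }'
    # prints 112.36 MiB/s, matching the Nvme1n1 and Total rows.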
00:15:50.795 16:35:27 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:15:50.795 16:35:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:15:50.795 16:35:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:50.795 16:35:27 -- common/autotest_common.sh@10 -- # set +x
00:15:50.795 ************************************
00:15:50.795 START TEST nvmf_nmic
00:15:50.795 ************************************
00:15:50.795 16:35:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:15:50.795 * Looking for test storage...
00:15:50.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:15:50.795 16:35:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:15:50.795 16:35:28 -- common/autotest_common.sh@1690 -- # lcov --version
00:15:50.795 16:35:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:15:50.795 16:35:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:15:50.795 16:35:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:15:50.795 16:35:28 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:15:50.795 16:35:28 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:15:50.795 16:35:28 -- scripts/common.sh@335 -- # IFS=.-:
00:15:50.795 16:35:28 -- scripts/common.sh@335 -- # read -ra ver1
00:15:50.795 16:35:28 -- scripts/common.sh@336 -- # IFS=.-:
00:15:50.795 16:35:28 -- scripts/common.sh@336 -- # read -ra ver2
00:15:50.795 16:35:28 -- scripts/common.sh@337 -- # local 'op=<'
00:15:50.795 16:35:28 -- scripts/common.sh@339 -- # ver1_l=2
00:15:50.795 16:35:28 -- scripts/common.sh@340 -- # ver2_l=1
00:15:50.795 16:35:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:15:50.795 16:35:28 -- scripts/common.sh@343 -- # case "$op" in
00:15:50.795 16:35:28 -- scripts/common.sh@344 -- # : 1
00:15:50.795 16:35:28 -- scripts/common.sh@363 -- # (( v = 0 ))
00:15:50.795 16:35:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:50.795 16:35:28 -- scripts/common.sh@364 -- # decimal 1
00:15:50.795 16:35:28 -- scripts/common.sh@352 -- # local d=1
00:15:50.795 16:35:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:50.795 16:35:28 -- scripts/common.sh@354 -- # echo 1
00:15:50.795 16:35:28 -- scripts/common.sh@364 -- # ver1[v]=1
00:15:50.795 16:35:28 -- scripts/common.sh@365 -- # decimal 2
00:15:50.795 16:35:28 -- scripts/common.sh@352 -- # local d=2
00:15:50.795 16:35:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:50.795 16:35:28 -- scripts/common.sh@354 -- # echo 2
00:15:50.795 16:35:28 -- scripts/common.sh@365 -- # ver2[v]=2
00:15:50.795 16:35:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:15:50.795 16:35:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:15:50.795 16:35:28 -- scripts/common.sh@367 -- # return 0
00:15:50.795 16:35:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:50.795 16:35:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:15:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.795 --rc genhtml_branch_coverage=1
00:15:50.795 --rc genhtml_function_coverage=1
00:15:50.795 --rc genhtml_legend=1
00:15:50.795 --rc geninfo_all_blocks=1
00:15:50.795 --rc geninfo_unexecuted_blocks=1
00:15:50.795
00:15:50.795 '
00:15:50.795 16:35:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:15:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.795 --rc genhtml_branch_coverage=1
00:15:50.795 --rc genhtml_function_coverage=1
00:15:50.795 --rc genhtml_legend=1
00:15:50.795 --rc geninfo_all_blocks=1
00:15:50.795 --rc geninfo_unexecuted_blocks=1
00:15:50.795
00:15:50.795 '
00:15:50.795 16:35:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:15:50.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:50.796 --rc
genhtml_branch_coverage=1 00:15:50.796 --rc genhtml_function_coverage=1 00:15:50.796 --rc genhtml_legend=1 00:15:50.796 --rc geninfo_all_blocks=1 00:15:50.796 --rc geninfo_unexecuted_blocks=1 00:15:50.796 00:15:50.796 ' 00:15:50.796 16:35:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:50.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.796 --rc genhtml_branch_coverage=1 00:15:50.796 --rc genhtml_function_coverage=1 00:15:50.796 --rc genhtml_legend=1 00:15:50.796 --rc geninfo_all_blocks=1 00:15:50.796 --rc geninfo_unexecuted_blocks=1 00:15:50.796 00:15:50.796 ' 00:15:50.796 16:35:28 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:50.796 16:35:28 -- nvmf/common.sh@7 -- # uname -s 00:15:50.796 16:35:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.796 16:35:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.796 16:35:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.796 16:35:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.796 16:35:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.796 16:35:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.796 16:35:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.796 16:35:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.796 16:35:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.796 16:35:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.796 16:35:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:15:50.796 16:35:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:15:50.796 16:35:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.796 16:35:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.796 16:35:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:50.796 16:35:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:50.796 16:35:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.796 16:35:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.796 16:35:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.796 16:35:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.796 16:35:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.796 16:35:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.796 16:35:28 -- paths/export.sh@5 -- # export PATH 00:15:50.796 16:35:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.796 16:35:28 -- nvmf/common.sh@46 -- # : 0 00:15:50.796 16:35:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:50.796 16:35:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:50.796 16:35:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:50.796 16:35:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.796 16:35:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.796 16:35:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:50.796 16:35:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:50.796 16:35:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:50.796 16:35:28 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.796 16:35:28 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.796 16:35:28 -- target/nmic.sh@14 -- # nvmftestinit 00:15:50.796 16:35:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:50.796 16:35:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.796 16:35:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:50.796 16:35:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:50.796 16:35:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:50.796 16:35:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.796 16:35:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.796 16:35:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.796 16:35:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:50.796 16:35:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:50.796 16:35:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:50.796 16:35:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:50.796 16:35:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:50.796 16:35:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:50.796 16:35:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.796 16:35:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.796 16:35:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:50.796 16:35:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:50.796 16:35:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:50.796 16:35:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:50.796 16:35:28 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:50.796 16:35:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.796 16:35:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:50.796 16:35:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:50.796 16:35:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:50.796 16:35:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:50.796 16:35:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:50.796 16:35:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:50.796 Cannot find device "nvmf_tgt_br" 00:15:50.796 16:35:28 -- nvmf/common.sh@154 -- # true 00:15:50.796 16:35:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.796 Cannot find device "nvmf_tgt_br2" 00:15:50.796 16:35:28 -- nvmf/common.sh@155 -- # true 00:15:50.796 16:35:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:50.796 16:35:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:50.796 Cannot find device "nvmf_tgt_br" 00:15:50.796 16:35:28 -- nvmf/common.sh@157 -- # true 00:15:50.796 16:35:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:50.796 Cannot find device "nvmf_tgt_br2" 00:15:50.796 16:35:28 -- nvmf/common.sh@158 -- # true 00:15:50.796 16:35:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:50.796 16:35:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:51.056 16:35:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.056 16:35:28 -- nvmf/common.sh@161 -- # true 00:15:51.056 16:35:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.056 16:35:28 -- nvmf/common.sh@162 -- # true 00:15:51.056 16:35:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.056 16:35:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.056 16:35:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.056 16:35:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.056 16:35:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.056 16:35:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.056 16:35:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.056 16:35:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:51.056 16:35:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:51.056 16:35:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:51.056 16:35:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:51.056 16:35:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:51.056 16:35:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:51.056 16:35:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.056 16:35:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.056 16:35:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:51.056 16:35:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:51.056 16:35:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:51.056 16:35:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.056 16:35:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.056 16:35:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.056 16:35:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.056 16:35:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.056 16:35:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:51.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:51.056 00:15:51.056 --- 10.0.0.2 ping statistics --- 00:15:51.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.056 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:51.056 16:35:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:51.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:51.056 00:15:51.056 --- 10.0.0.3 ping statistics --- 00:15:51.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.056 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:51.056 16:35:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:51.056 00:15:51.056 --- 10.0.0.1 ping statistics --- 00:15:51.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.056 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:51.056 16:35:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.056 16:35:28 -- nvmf/common.sh@421 -- # return 0 00:15:51.056 16:35:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:51.056 16:35:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.056 16:35:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:51.056 16:35:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:51.056 16:35:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.056 16:35:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:51.056 16:35:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:51.056 16:35:28 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:51.056 16:35:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:51.056 16:35:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.056 16:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:51.316 16:35:28 -- nvmf/common.sh@469 -- # nvmfpid=86814 00:15:51.316 16:35:28 -- nvmf/common.sh@470 -- # waitforlisten 86814 00:15:51.316 16:35:28 -- common/autotest_common.sh@829 -- # '[' -z 86814 ']' 00:15:51.316 16:35:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.316 16:35:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
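
The bring-up above is dense xtrace, so here is a minimal sketch of the topology nvmf_veth_init builds, reduced to one initiator and one target interface. Interface names, addresses, the iptables rules, and the ping checks are taken verbatim from the log; the second target interface (nvmf_tgt_if2/nvmf_tgt_br2) and the tolerated-failure cleanup pass are omitted, and the whole thing assumes root and iproute2.

  #!/usr/bin/env bash
  # Sketch of the veth/netns topology used by this test (assumes root).
  set -e

  ip netns add nvmf_tgt_ns_spdk

  # One veth pair per endpoint: *_if is the endpoint, *_br the bridge-facing peer.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br

  # Move the target endpoint into the namespace and address both ends.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring everything up, including loopback inside the namespace.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers so initiator and target can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Open NVMe/TCP port 4420 and allow intra-bridge forwarding.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Verify the data path in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside the namespace (the ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF invocation just above), so the host-side pings exercise exactly the path the NVMe/TCP traffic will take.
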
00:15:51.316 16:35:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.316 16:35:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.316 16:35:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.316 16:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:51.316 [2024-11-16 16:35:28.603269] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:51.316 [2024-11-16 16:35:28.603363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.316 [2024-11-16 16:35:28.742758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.575 [2024-11-16 16:35:28.812377] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:51.575 [2024-11-16 16:35:28.812533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.575 [2024-11-16 16:35:28.812546] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.575 [2024-11-16 16:35:28.812554] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.575 [2024-11-16 16:35:28.812983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.575 [2024-11-16 16:35:28.813602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.575 [2024-11-16 16:35:28.813797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.575 [2024-11-16 16:35:28.814027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.155 16:35:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.155 16:35:29 -- common/autotest_common.sh@862 -- # return 0 00:15:52.155 16:35:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:52.155 16:35:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:52.155 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.155 16:35:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.155 16:35:29 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:52.155 16:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.155 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.155 [2024-11-16 16:35:29.577698] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.155 16:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.155 16:35:29 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:52.155 16:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.155 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.155 Malloc0 00:15:52.155 16:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.155 16:35:29 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:52.155 16:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.155 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.155 16:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.155 16:35:29 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:52.155 
16:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.155 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.155 16:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.155 16:35:29 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:52.155 16:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.155 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.414 [2024-11-16 16:35:29.650273] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.414 16:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.414 16:35:29 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:52.414 test case1: single bdev can't be used in multiple subsystems 00:15:52.414 16:35:29 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:52.414 16:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.414 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.414 16:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.414 16:35:29 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:52.414 16:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.414 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.414 16:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.414 16:35:29 -- target/nmic.sh@28 -- # nmic_status=0 00:15:52.414 16:35:29 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:52.414 16:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.414 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.414 [2024-11-16 16:35:29.674041] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:52.414 [2024-11-16 16:35:29.674089] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:52.414 [2024-11-16 16:35:29.674100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:52.414 2024/11/16 16:35:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:52.414 request: 00:15:52.414 { 00:15:52.414 "method": "nvmf_subsystem_add_ns", 00:15:52.414 "params": { 00:15:52.414 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:52.414 "namespace": { 00:15:52.414 "bdev_name": "Malloc0" 00:15:52.414 } 00:15:52.414 } 00:15:52.414 } 00:15:52.414 Got JSON-RPC error response 00:15:52.414 GoRPCClient: error on JSON-RPC call 00:15:52.414 16:35:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:52.414 16:35:29 -- target/nmic.sh@29 -- # nmic_status=1 00:15:52.414 16:35:29 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:52.414 16:35:29 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:52.414 Adding namespace failed - expected result. 
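
Stripped of the rpc_cmd wrapper and xtrace, test case1 boils down to the short RPC sequence below. This is a sketch, not the literal nmic.sh body: it assumes the target is already running with a TCP transport created (nvmf_create_transport -t tcp -o -u 8192, as above) and reuses the bdev name, NQNs, and serials from the log. The failure is the point: the NVMe-oF target takes an exclusive_write claim on Malloc0 at the first attach, so the second attach returns JSON-RPC -32602.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # First subsystem claims the bdev; this succeeds.
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # Second subsystem cannot open the already-claimed bdev; expect -32602.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'Adding namespace failed - expected result.'
  fi
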
00:15:52.414 16:35:29 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:52.414 test case2: host connect to nvmf target in multiple paths 00:15:52.414 16:35:29 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:52.414 16:35:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.414 16:35:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.414 [2024-11-16 16:35:29.686158] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:52.414 16:35:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.414 16:35:29 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:52.414 16:35:29 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:52.672 16:35:30 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:52.672 16:35:30 -- common/autotest_common.sh@1187 -- # local i=0 00:15:52.672 16:35:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.672 16:35:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:52.672 16:35:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:54.574 16:35:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:54.574 16:35:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:54.574 16:35:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.574 16:35:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:54.574 16:35:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.574 16:35:32 -- common/autotest_common.sh@1197 -- # return 0 00:15:54.574 16:35:32 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:54.833 [global] 00:15:54.833 thread=1 00:15:54.833 invalidate=1 00:15:54.833 rw=write 00:15:54.833 time_based=1 00:15:54.833 runtime=1 00:15:54.833 ioengine=libaio 00:15:54.833 direct=1 00:15:54.833 bs=4096 00:15:54.833 iodepth=1 00:15:54.833 norandommap=0 00:15:54.833 numjobs=1 00:15:54.833 00:15:54.833 verify_dump=1 00:15:54.833 verify_backlog=512 00:15:54.833 verify_state_save=0 00:15:54.833 do_verify=1 00:15:54.833 verify=crc32c-intel 00:15:54.833 [job0] 00:15:54.833 filename=/dev/nvme0n1 00:15:54.833 Could not set queue depth (nvme0n1) 00:15:54.833 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.833 fio-3.35 00:15:54.833 Starting 1 thread 00:15:56.211 00:15:56.211 job0: (groupid=0, jobs=1): err= 0: pid=86924: Sat Nov 16 16:35:33 2024 00:15:56.211 read: IOPS=3242, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec) 00:15:56.211 slat (nsec): min=12392, max=65058, avg=14887.05, stdev=4161.36 00:15:56.211 clat (usec): min=109, max=449, avg=147.55, stdev=22.05 00:15:56.211 lat (usec): min=122, max=472, avg=162.44, stdev=23.02 00:15:56.211 clat percentiles (usec): 00:15:56.211 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:15:56.211 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:15:56.211 | 70.00th=[ 153], 80.00th=[ 163], 90.00th=[ 176], 
95.00th=[ 186], 00:15:56.211 | 99.00th=[ 210], 99.50th=[ 223], 99.90th=[ 355], 99.95th=[ 404], 00:15:56.211 | 99.99th=[ 449] 00:15:56.211 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:56.211 slat (usec): min=16, max=117, avg=23.79, stdev= 6.92 00:15:56.211 clat (usec): min=80, max=343, avg=105.34, stdev=18.42 00:15:56.211 lat (usec): min=100, max=365, avg=129.13, stdev=20.78 00:15:56.211 clat percentiles (usec): 00:15:56.211 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 93], 00:15:56.211 | 30.00th=[ 95], 40.00th=[ 98], 50.00th=[ 101], 60.00th=[ 104], 00:15:56.211 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 128], 95.00th=[ 141], 00:15:56.211 | 99.00th=[ 163], 99.50th=[ 176], 99.90th=[ 277], 99.95th=[ 289], 00:15:56.211 | 99.99th=[ 343] 00:15:56.211 bw ( KiB/s): min=14904, max=14904, per=100.00%, avg=14904.00, stdev= 0.00, samples=1 00:15:56.211 iops : min= 3726, max= 3726, avg=3726.00, stdev= 0.00, samples=1 00:15:56.211 lat (usec) : 100=25.40%, 250=74.33%, 500=0.26% 00:15:56.211 cpu : usr=2.20%, sys=9.40%, ctx=6830, majf=0, minf=5 00:15:56.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:56.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.211 issued rwts: total=3246,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:56.211 00:15:56.211 Run status group 0 (all jobs): 00:15:56.211 READ: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1001-1001msec 00:15:56.211 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:15:56.211 00:15:56.211 Disk stats (read/write): 00:15:56.211 nvme0n1: ios=3069/3072, merge=0/0, ticks=485/344, in_queue=829, util=91.18% 00:15:56.211 16:35:33 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:56.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:56.211 16:35:33 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:56.211 16:35:33 -- common/autotest_common.sh@1208 -- # local i=0 00:15:56.211 16:35:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:56.211 16:35:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.211 16:35:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.211 16:35:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:56.211 16:35:33 -- common/autotest_common.sh@1220 -- # return 0 00:15:56.211 16:35:33 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:56.211 16:35:33 -- target/nmic.sh@53 -- # nvmftestfini 00:15:56.211 16:35:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:56.211 16:35:33 -- nvmf/common.sh@116 -- # sync 00:15:56.211 16:35:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:56.211 16:35:33 -- nvmf/common.sh@119 -- # set +e 00:15:56.211 16:35:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:56.211 16:35:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:56.211 rmmod nvme_tcp 00:15:56.211 rmmod nvme_fabrics 00:15:56.211 rmmod nvme_keyring 00:15:56.212 16:35:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:56.212 16:35:33 -- nvmf/common.sh@123 -- # set -e 00:15:56.212 16:35:33 -- nvmf/common.sh@124 -- # return 0 00:15:56.212 16:35:33 -- nvmf/common.sh@477 -- # '[' -n 
86814 ']' 00:15:56.212 16:35:33 -- nvmf/common.sh@478 -- # killprocess 86814 00:15:56.212 16:35:33 -- common/autotest_common.sh@936 -- # '[' -z 86814 ']' 00:15:56.212 16:35:33 -- common/autotest_common.sh@940 -- # kill -0 86814 00:15:56.212 16:35:33 -- common/autotest_common.sh@941 -- # uname 00:15:56.212 16:35:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.212 16:35:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86814 00:15:56.212 16:35:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:56.212 16:35:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:56.212 killing process with pid 86814 00:15:56.212 16:35:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86814' 00:15:56.212 16:35:33 -- common/autotest_common.sh@955 -- # kill 86814 00:15:56.212 16:35:33 -- common/autotest_common.sh@960 -- # wait 86814 00:15:56.470 16:35:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:56.470 16:35:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:56.470 16:35:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:56.470 16:35:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.470 16:35:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:56.470 16:35:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.470 16:35:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.470 16:35:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.470 16:35:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:56.470 00:15:56.470 real 0m5.947s 00:15:56.470 user 0m19.861s 00:15:56.470 sys 0m1.327s 00:15:56.470 16:35:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:56.470 16:35:33 -- common/autotest_common.sh@10 -- # set +x 00:15:56.470 ************************************ 00:15:56.470 END TEST nvmf_nmic 00:15:56.470 ************************************ 00:15:56.471 16:35:33 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:56.471 16:35:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:56.471 16:35:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:56.471 16:35:33 -- common/autotest_common.sh@10 -- # set +x 00:15:56.729 ************************************ 00:15:56.729 START TEST nvmf_fio_target 00:15:56.729 ************************************ 00:15:56.729 16:35:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:56.729 * Looking for test storage... 
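
Before the fio_target log gets going, the nvmf_nmic teardown just above is worth restating: disconnect, unload the kernel modules, then kill the target only after confirming the pid still belongs to an SPDK reactor. A sketch of that pattern, with the pid (86814 in the log) made a parameter; the retry cadence and the sleep are assumptions, not the literal common.sh body:

  nvmf_teardown() {
      local pid=$1
      sync
      # Module unload can race with the just-closed connection, so retry.
      set +e
      for i in {1..20}; do
          modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
          sleep 1
      done
      set -e
      # Kill only if the pid is alive and really our reactor (not a reused pid).
      if kill -0 "$pid" 2>/dev/null &&
         [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]]; then
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid"   # valid here because the target was started by this shell
      fi
      ip -4 addr flush nvmf_init_if
  }
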
00:15:56.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.729 16:35:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:56.729 16:35:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:56.729 16:35:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:56.729 16:35:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:56.729 16:35:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:56.729 16:35:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:56.730 16:35:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:56.730 16:35:34 -- scripts/common.sh@335 -- # IFS=.-: 00:15:56.730 16:35:34 -- scripts/common.sh@335 -- # read -ra ver1 00:15:56.730 16:35:34 -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.730 16:35:34 -- scripts/common.sh@336 -- # read -ra ver2 00:15:56.730 16:35:34 -- scripts/common.sh@337 -- # local 'op=<' 00:15:56.730 16:35:34 -- scripts/common.sh@339 -- # ver1_l=2 00:15:56.730 16:35:34 -- scripts/common.sh@340 -- # ver2_l=1 00:15:56.730 16:35:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:56.730 16:35:34 -- scripts/common.sh@343 -- # case "$op" in 00:15:56.730 16:35:34 -- scripts/common.sh@344 -- # : 1 00:15:56.730 16:35:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:56.730 16:35:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:56.730 16:35:34 -- scripts/common.sh@364 -- # decimal 1 00:15:56.730 16:35:34 -- scripts/common.sh@352 -- # local d=1 00:15:56.730 16:35:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.730 16:35:34 -- scripts/common.sh@354 -- # echo 1 00:15:56.730 16:35:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:56.730 16:35:34 -- scripts/common.sh@365 -- # decimal 2 00:15:56.730 16:35:34 -- scripts/common.sh@352 -- # local d=2 00:15:56.730 16:35:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.730 16:35:34 -- scripts/common.sh@354 -- # echo 2 00:15:56.730 16:35:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:56.730 16:35:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:56.730 16:35:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:56.730 16:35:34 -- scripts/common.sh@367 -- # return 0 00:15:56.730 16:35:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.730 16:35:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:56.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.730 --rc genhtml_branch_coverage=1 00:15:56.730 --rc genhtml_function_coverage=1 00:15:56.730 --rc genhtml_legend=1 00:15:56.730 --rc geninfo_all_blocks=1 00:15:56.730 --rc geninfo_unexecuted_blocks=1 00:15:56.730 00:15:56.730 ' 00:15:56.730 16:35:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:56.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.730 --rc genhtml_branch_coverage=1 00:15:56.730 --rc genhtml_function_coverage=1 00:15:56.730 --rc genhtml_legend=1 00:15:56.730 --rc geninfo_all_blocks=1 00:15:56.730 --rc geninfo_unexecuted_blocks=1 00:15:56.730 00:15:56.730 ' 00:15:56.730 16:35:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:56.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.730 --rc genhtml_branch_coverage=1 00:15:56.730 --rc genhtml_function_coverage=1 00:15:56.730 --rc genhtml_legend=1 00:15:56.730 --rc geninfo_all_blocks=1 00:15:56.730 --rc geninfo_unexecuted_blocks=1 00:15:56.730 00:15:56.730 ' 00:15:56.730 
16:35:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:56.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.730 --rc genhtml_branch_coverage=1 00:15:56.730 --rc genhtml_function_coverage=1 00:15:56.730 --rc genhtml_legend=1 00:15:56.730 --rc geninfo_all_blocks=1 00:15:56.730 --rc geninfo_unexecuted_blocks=1 00:15:56.730 00:15:56.730 ' 00:15:56.730 16:35:34 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.730 16:35:34 -- nvmf/common.sh@7 -- # uname -s 00:15:56.730 16:35:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.730 16:35:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.730 16:35:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.730 16:35:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.730 16:35:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.730 16:35:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.730 16:35:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.730 16:35:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.730 16:35:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.730 16:35:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.730 16:35:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:15:56.730 16:35:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:15:56.730 16:35:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.730 16:35:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.730 16:35:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.730 16:35:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.730 16:35:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.730 16:35:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.730 16:35:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.730 16:35:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.730 16:35:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.730 16:35:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.730 16:35:34 -- paths/export.sh@5 -- # export PATH 00:15:56.730 16:35:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.730 16:35:34 -- nvmf/common.sh@46 -- # : 0 00:15:56.730 16:35:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:56.730 16:35:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:56.730 16:35:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:56.730 16:35:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.730 16:35:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.730 16:35:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:56.730 16:35:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:56.730 16:35:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:56.730 16:35:34 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.730 16:35:34 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.730 16:35:34 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.730 16:35:34 -- target/fio.sh@16 -- # nvmftestinit 00:15:56.730 16:35:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:56.730 16:35:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.730 16:35:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:56.730 16:35:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:56.730 16:35:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:56.730 16:35:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.730 16:35:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.730 16:35:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.730 16:35:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:56.730 16:35:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:56.730 16:35:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:56.730 16:35:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:56.730 16:35:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:56.730 16:35:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:56.730 16:35:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.730 16:35:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.730 16:35:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.730 16:35:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:56.730 16:35:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.730 16:35:34 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.730 16:35:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.730 16:35:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.730 16:35:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.730 16:35:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.730 16:35:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.730 16:35:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.730 16:35:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:56.730 16:35:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:56.730 Cannot find device "nvmf_tgt_br" 00:15:56.730 16:35:34 -- nvmf/common.sh@154 -- # true 00:15:56.730 16:35:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.730 Cannot find device "nvmf_tgt_br2" 00:15:56.730 16:35:34 -- nvmf/common.sh@155 -- # true 00:15:56.730 16:35:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:56.730 16:35:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:56.989 Cannot find device "nvmf_tgt_br" 00:15:56.989 16:35:34 -- nvmf/common.sh@157 -- # true 00:15:56.989 16:35:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:56.989 Cannot find device "nvmf_tgt_br2" 00:15:56.989 16:35:34 -- nvmf/common.sh@158 -- # true 00:15:56.989 16:35:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:56.989 16:35:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:56.989 16:35:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.989 16:35:34 -- nvmf/common.sh@161 -- # true 00:15:56.989 16:35:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.989 16:35:34 -- nvmf/common.sh@162 -- # true 00:15:56.989 16:35:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.989 16:35:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.989 16:35:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.989 16:35:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.989 16:35:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.989 16:35:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.989 16:35:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.989 16:35:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:56.989 16:35:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:56.989 16:35:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:56.989 16:35:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:56.989 16:35:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:56.989 16:35:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:56.989 16:35:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.989 16:35:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
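
The "Cannot find device" and "Cannot open network namespace" lines above are expected, not failures: nvmftestinit tears the previous topology down before rebuilding it, and each delete is allowed to fail when nothing is there yet (the xtrace shows a bare true recorded after each failing command). A sketch of the idiom; the || true spelling is an assumption, the harness reaches the same effect slightly differently:

  ip link set nvmf_init_br nomaster 2>/dev/null || true
  ip link set nvmf_tgt_br nomaster 2>/dev/null || true
  ip link set nvmf_tgt_br2 nomaster 2>/dev/null || true
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
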
00:15:56.989 16:35:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.989 16:35:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:56.989 16:35:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:56.989 16:35:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.989 16:35:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.989 16:35:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.989 16:35:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.989 16:35:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.989 16:35:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:56.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:15:56.989 00:15:56.989 --- 10.0.0.2 ping statistics --- 00:15:56.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.989 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:56.989 16:35:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:56.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:15:56.989 00:15:56.989 --- 10.0.0.3 ping statistics --- 00:15:56.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.989 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:56.989 16:35:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:15:56.990 00:15:56.990 --- 10.0.0.1 ping statistics --- 00:15:56.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.990 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:15:56.990 16:35:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.990 16:35:34 -- nvmf/common.sh@421 -- # return 0 00:15:56.990 16:35:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:56.990 16:35:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.990 16:35:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:56.990 16:35:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:56.990 16:35:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.990 16:35:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:56.990 16:35:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:57.248 16:35:34 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:57.248 16:35:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:57.248 16:35:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:57.248 16:35:34 -- common/autotest_common.sh@10 -- # set +x 00:15:57.248 16:35:34 -- nvmf/common.sh@469 -- # nvmfpid=87106 00:15:57.248 16:35:34 -- nvmf/common.sh@470 -- # waitforlisten 87106 00:15:57.248 16:35:34 -- common/autotest_common.sh@829 -- # '[' -z 87106 ']' 00:15:57.248 16:35:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.248 16:35:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.248 16:35:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.248 16:35:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:57.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.248 16:35:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.248 16:35:34 -- common/autotest_common.sh@10 -- # set +x 00:15:57.248 [2024-11-16 16:35:34.550032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:57.248 [2024-11-16 16:35:34.550108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.248 [2024-11-16 16:35:34.682584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.507 [2024-11-16 16:35:34.749666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:57.507 [2024-11-16 16:35:34.749815] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.507 [2024-11-16 16:35:34.749829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.507 [2024-11-16 16:35:34.749837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.507 [2024-11-16 16:35:34.750199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.507 [2024-11-16 16:35:34.750428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.507 [2024-11-16 16:35:34.750580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.507 [2024-11-16 16:35:34.750582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.074 16:35:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.074 16:35:35 -- common/autotest_common.sh@862 -- # return 0 00:15:58.074 16:35:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:58.074 16:35:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.074 16:35:35 -- common/autotest_common.sh@10 -- # set +x 00:15:58.074 16:35:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.074 16:35:35 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:58.331 [2024-11-16 16:35:35.691525] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.332 16:35:35 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:58.590 16:35:36 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:58.590 16:35:36 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.156 16:35:36 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:59.156 16:35:36 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.156 16:35:36 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:59.156 16:35:36 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.414 16:35:36 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:59.414 16:35:36 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:59.673 16:35:37 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:59.931 16:35:37 -- target/fio.sh@29 -- # 
concat_malloc_bdevs='Malloc4 ' 00:15:59.932 16:35:37 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.197 16:35:37 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:00.197 16:35:37 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:00.456 16:35:37 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:00.456 16:35:37 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:00.715 16:35:38 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:00.974 16:35:38 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:00.974 16:35:38 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:01.233 16:35:38 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:01.233 16:35:38 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:01.492 16:35:38 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.751 [2024-11-16 16:35:39.031799] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.751 16:35:39 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:02.010 16:35:39 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:02.010 16:35:39 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:02.269 16:35:39 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:02.269 16:35:39 -- common/autotest_common.sh@1187 -- # local i=0 00:16:02.269 16:35:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.269 16:35:39 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:02.269 16:35:39 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:02.269 16:35:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:04.174 16:35:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:04.174 16:35:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:04.174 16:35:41 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:04.432 16:35:41 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:04.432 16:35:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.432 16:35:41 -- common/autotest_common.sh@1197 -- # return 0 00:16:04.432 16:35:41 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:04.432 [global] 00:16:04.432 thread=1 00:16:04.432 invalidate=1 00:16:04.432 rw=write 00:16:04.432 time_based=1 00:16:04.432 runtime=1 00:16:04.432 ioengine=libaio 00:16:04.432 direct=1 00:16:04.432 bs=4096 00:16:04.432 iodepth=1 00:16:04.432 norandommap=0 00:16:04.432 numjobs=1 00:16:04.432 00:16:04.432 verify_dump=1 00:16:04.432 verify_backlog=512 00:16:04.432 
verify_state_save=0 00:16:04.432 do_verify=1 00:16:04.432 verify=crc32c-intel 00:16:04.432 [job0] 00:16:04.432 filename=/dev/nvme0n1 00:16:04.432 [job1] 00:16:04.432 filename=/dev/nvme0n2 00:16:04.432 [job2] 00:16:04.432 filename=/dev/nvme0n3 00:16:04.432 [job3] 00:16:04.432 filename=/dev/nvme0n4 00:16:04.432 Could not set queue depth (nvme0n1) 00:16:04.432 Could not set queue depth (nvme0n2) 00:16:04.432 Could not set queue depth (nvme0n3) 00:16:04.432 Could not set queue depth (nvme0n4) 00:16:04.432 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.432 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.432 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.432 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.432 fio-3.35 00:16:04.432 Starting 4 threads 00:16:05.811 00:16:05.811 job0: (groupid=0, jobs=1): err= 0: pid=87401: Sat Nov 16 16:35:43 2024 00:16:05.811 read: IOPS=2094, BW=8380KiB/s (8581kB/s)(8388KiB/1001msec) 00:16:05.811 slat (nsec): min=12601, max=55587, avg=15490.22, stdev=4563.25 00:16:05.811 clat (usec): min=146, max=334, avg=212.41, stdev=24.63 00:16:05.811 lat (usec): min=160, max=348, avg=227.90, stdev=25.27 00:16:05.811 clat percentiles (usec): 00:16:05.811 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 192], 00:16:05.811 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:16:05.811 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 255], 00:16:05.811 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 306], 99.95th=[ 310], 00:16:05.811 | 99.99th=[ 334] 00:16:05.811 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:05.811 slat (usec): min=18, max=130, avg=24.51, stdev= 7.53 00:16:05.811 clat (usec): min=100, max=1691, avg=176.59, stdev=46.85 00:16:05.811 lat (usec): min=120, max=1714, avg=201.10, stdev=47.82 00:16:05.811 clat percentiles (usec): 00:16:05.811 | 1.00th=[ 116], 5.00th=[ 130], 10.00th=[ 143], 20.00th=[ 155], 00:16:05.811 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:16:05.811 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 221], 00:16:05.811 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 449], 99.95th=[ 1336], 00:16:05.811 | 99.99th=[ 1696] 00:16:05.811 bw ( KiB/s): min= 9912, max= 9912, per=26.07%, avg=9912.00, stdev= 0.00, samples=1 00:16:05.811 iops : min= 2478, max= 2478, avg=2478.00, stdev= 0.00, samples=1 00:16:05.811 lat (usec) : 250=96.01%, 500=3.95% 00:16:05.811 lat (msec) : 2=0.04% 00:16:05.811 cpu : usr=1.50%, sys=6.90%, ctx=4657, majf=0, minf=5 00:16:05.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.811 issued rwts: total=2097,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.811 job1: (groupid=0, jobs=1): err= 0: pid=87402: Sat Nov 16 16:35:43 2024 00:16:05.811 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:05.811 slat (nsec): min=11927, max=57843, avg=15654.07, stdev=4204.13 00:16:05.811 clat (usec): min=126, max=335, avg=222.08, stdev=31.83 00:16:05.811 lat (usec): min=140, max=355, avg=237.74, stdev=32.17 
00:16:05.811 clat percentiles (usec): 00:16:05.811 | 1.00th=[ 149], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 198], 00:16:05.811 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:16:05.811 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 281], 00:16:05.811 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 326], 99.95th=[ 334], 00:16:05.811 | 99.99th=[ 334] 00:16:05.811 write: IOPS=2295, BW=9183KiB/s (9403kB/s)(9192KiB/1001msec); 0 zone resets 00:16:05.811 slat (usec): min=18, max=131, avg=25.44, stdev= 7.43 00:16:05.811 clat (usec): min=99, max=3190, avg=194.75, stdev=72.10 00:16:05.811 lat (usec): min=118, max=3215, avg=220.19, stdev=72.64 00:16:05.811 clat percentiles (usec): 00:16:05.811 | 1.00th=[ 121], 5.00th=[ 147], 10.00th=[ 157], 20.00th=[ 169], 00:16:05.811 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 198], 00:16:05.811 | 70.00th=[ 206], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 251], 00:16:05.811 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 619], 99.95th=[ 701], 00:16:05.811 | 99.99th=[ 3195] 00:16:05.811 bw ( KiB/s): min= 8952, max= 8952, per=23.55%, avg=8952.00, stdev= 0.00, samples=1 00:16:05.811 iops : min= 2238, max= 2238, avg=2238.00, stdev= 0.00, samples=1 00:16:05.811 lat (usec) : 100=0.02%, 250=88.79%, 500=11.07%, 750=0.09% 00:16:05.811 lat (msec) : 4=0.02% 00:16:05.811 cpu : usr=1.10%, sys=6.90%, ctx=4350, majf=0, minf=11 00:16:05.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.811 issued rwts: total=2048,2298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.811 job2: (groupid=0, jobs=1): err= 0: pid=87403: Sat Nov 16 16:35:43 2024 00:16:05.811 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:05.811 slat (nsec): min=11539, max=60607, avg=15265.70, stdev=4994.09 00:16:05.811 clat (usec): min=144, max=352, avg=226.11, stdev=33.46 00:16:05.811 lat (usec): min=156, max=365, avg=241.38, stdev=34.02 00:16:05.811 clat percentiles (usec): 00:16:05.811 | 1.00th=[ 157], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 198], 00:16:05.811 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:16:05.811 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 289], 00:16:05.811 | 99.00th=[ 314], 99.50th=[ 318], 99.90th=[ 338], 99.95th=[ 338], 00:16:05.811 | 99.99th=[ 355] 00:16:05.811 write: IOPS=2186, BW=8747KiB/s (8957kB/s)(8756KiB/1001msec); 0 zone resets 00:16:05.811 slat (nsec): min=16934, max=91246, avg=23168.86, stdev=7849.16 00:16:05.811 clat (usec): min=105, max=7570, avg=204.37, stdev=163.09 00:16:05.811 lat (usec): min=126, max=7589, avg=227.54, stdev=163.21 00:16:05.811 clat percentiles (usec): 00:16:05.811 | 1.00th=[ 133], 5.00th=[ 155], 10.00th=[ 165], 20.00th=[ 176], 00:16:05.811 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 204], 00:16:05.811 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 258], 00:16:05.811 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 537], 99.95th=[ 1483], 00:16:05.811 | 99.99th=[ 7570] 00:16:05.811 bw ( KiB/s): min= 8736, max= 8736, per=22.98%, avg=8736.00, stdev= 0.00, samples=1 00:16:05.811 iops : min= 2184, max= 2184, avg=2184.00, stdev= 0.00, samples=1 00:16:05.811 lat (usec) : 250=85.89%, 500=14.04%, 750=0.02% 00:16:05.811 lat (msec) : 2=0.02%, 10=0.02% 00:16:05.811 cpu : usr=1.70%, 
sys=6.00%, ctx=4240, majf=0, minf=7 00:16:05.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.811 issued rwts: total=2048,2189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.811 job3: (groupid=0, jobs=1): err= 0: pid=87404: Sat Nov 16 16:35:43 2024 00:16:05.811 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:05.811 slat (nsec): min=13360, max=64314, avg=16589.49, stdev=4785.32 00:16:05.811 clat (usec): min=143, max=318, avg=215.84, stdev=26.68 00:16:05.811 lat (usec): min=159, max=334, avg=232.43, stdev=27.19 00:16:05.811 clat percentiles (usec): 00:16:05.811 | 1.00th=[ 157], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 196], 00:16:05.811 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 221], 00:16:05.811 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:16:05.811 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 318], 99.95th=[ 318], 00:16:05.811 | 99.99th=[ 318] 00:16:05.811 write: IOPS=2464, BW=9858KiB/s (10.1MB/s)(9868KiB/1001msec); 0 zone resets 00:16:05.811 slat (usec): min=19, max=105, avg=26.04, stdev= 7.32 00:16:05.811 clat (usec): min=105, max=310, avg=183.35, stdev=28.26 00:16:05.811 lat (usec): min=128, max=338, avg=209.40, stdev=29.37 00:16:05.811 clat percentiles (usec): 00:16:05.811 | 1.00th=[ 125], 5.00th=[ 141], 10.00th=[ 151], 20.00th=[ 161], 00:16:05.811 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:16:05.811 | 70.00th=[ 196], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 235], 00:16:05.811 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 310], 99.95th=[ 310], 00:16:05.811 | 99.99th=[ 310] 00:16:05.812 bw ( KiB/s): min= 9576, max= 9576, per=25.19%, avg=9576.00, stdev= 0.00, samples=1 00:16:05.812 iops : min= 2394, max= 2394, avg=2394.00, stdev= 0.00, samples=1 00:16:05.812 lat (usec) : 250=94.13%, 500=5.87% 00:16:05.812 cpu : usr=1.70%, sys=6.90%, ctx=4515, majf=0, minf=14 00:16:05.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.812 issued rwts: total=2048,2467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.812 00:16:05.812 Run status group 0 (all jobs): 00:16:05.812 READ: bw=32.2MiB/s (33.7MB/s), 8184KiB/s-8380KiB/s (8380kB/s-8581kB/s), io=32.2MiB (33.8MB), run=1001-1001msec 00:16:05.812 WRITE: bw=37.1MiB/s (38.9MB/s), 8747KiB/s-9.99MiB/s (8957kB/s-10.5MB/s), io=37.2MiB (39.0MB), run=1001-1001msec 00:16:05.812 00:16:05.812 Disk stats (read/write): 00:16:05.812 nvme0n1: ios=1924/2048, merge=0/0, ticks=489/387, in_queue=876, util=91.57% 00:16:05.812 nvme0n2: ios=1651/2048, merge=0/0, ticks=368/418, in_queue=786, util=86.82% 00:16:05.812 nvme0n3: ios=1648/2048, merge=0/0, ticks=454/428, in_queue=882, util=91.64% 00:16:05.812 nvme0n4: ios=1770/2048, merge=0/0, ticks=397/399, in_queue=796, util=89.59% 00:16:05.812 16:35:43 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:05.812 [global] 00:16:05.812 thread=1 00:16:05.812 invalidate=1 00:16:05.812 rw=randwrite 00:16:05.812 time_based=1 00:16:05.812 runtime=1 
00:16:05.812 ioengine=libaio 00:16:05.812 direct=1 00:16:05.812 bs=4096 00:16:05.812 iodepth=1 00:16:05.812 norandommap=0 00:16:05.812 numjobs=1 00:16:05.812 00:16:05.812 verify_dump=1 00:16:05.812 verify_backlog=512 00:16:05.812 verify_state_save=0 00:16:05.812 do_verify=1 00:16:05.812 verify=crc32c-intel 00:16:05.812 [job0] 00:16:05.812 filename=/dev/nvme0n1 00:16:05.812 [job1] 00:16:05.812 filename=/dev/nvme0n2 00:16:05.812 [job2] 00:16:05.812 filename=/dev/nvme0n3 00:16:05.812 [job3] 00:16:05.812 filename=/dev/nvme0n4 00:16:05.812 Could not set queue depth (nvme0n1) 00:16:05.812 Could not set queue depth (nvme0n2) 00:16:05.812 Could not set queue depth (nvme0n3) 00:16:05.812 Could not set queue depth (nvme0n4) 00:16:05.812 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.812 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.812 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.812 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:05.812 fio-3.35 00:16:05.812 Starting 4 threads 00:16:07.191 00:16:07.191 job0: (groupid=0, jobs=1): err= 0: pid=87458: Sat Nov 16 16:35:44 2024 00:16:07.191 read: IOPS=1263, BW=5055KiB/s (5176kB/s)(5060KiB/1001msec) 00:16:07.191 slat (nsec): min=7714, max=81818, avg=24956.29, stdev=9502.70 00:16:07.191 clat (usec): min=197, max=1036, avg=351.22, stdev=45.47 00:16:07.191 lat (usec): min=207, max=1048, avg=376.18, stdev=44.07 00:16:07.191 clat percentiles (usec): 00:16:07.191 | 1.00th=[ 277], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 318], 00:16:07.191 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:16:07.191 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 424], 00:16:07.191 | 99.00th=[ 486], 99.50th=[ 519], 99.90th=[ 660], 99.95th=[ 1037], 00:16:07.191 | 99.99th=[ 1037] 00:16:07.191 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:07.191 slat (usec): min=10, max=109, avg=37.12, stdev=10.25 00:16:07.191 clat (usec): min=146, max=51806, avg=299.20, stdev=1315.89 00:16:07.191 lat (usec): min=174, max=51841, avg=336.33, stdev=1315.85 00:16:07.191 clat percentiles (usec): 00:16:07.191 | 1.00th=[ 182], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 229], 00:16:07.191 | 30.00th=[ 239], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 269], 00:16:07.191 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 371], 00:16:07.191 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 461], 99.95th=[51643], 00:16:07.191 | 99.99th=[51643] 00:16:07.191 bw ( KiB/s): min= 7984, max= 7984, per=27.36%, avg=7984.00, stdev= 0.00, samples=1 00:16:07.191 iops : min= 1996, max= 1996, avg=1996.00, stdev= 0.00, samples=1 00:16:07.191 lat (usec) : 250=22.13%, 500=77.47%, 750=0.32% 00:16:07.191 lat (msec) : 2=0.04%, 100=0.04% 00:16:07.191 cpu : usr=2.00%, sys=6.20%, ctx=2802, majf=0, minf=13 00:16:07.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.191 issued rwts: total=1265,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.191 job1: (groupid=0, jobs=1): err= 0: pid=87459: Sat Nov 16 16:35:44 2024 
00:16:07.191 read: IOPS=1817, BW=7269KiB/s (7443kB/s)(7276KiB/1001msec) 00:16:07.191 slat (nsec): min=9659, max=59991, avg=17136.46, stdev=5861.52 00:16:07.191 clat (usec): min=149, max=7557, avg=266.99, stdev=183.79 00:16:07.191 lat (usec): min=162, max=7572, avg=284.12, stdev=183.46 00:16:07.191 clat percentiles (usec): 00:16:07.191 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 198], 20.00th=[ 212], 00:16:07.191 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 260], 00:16:07.191 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 363], 95.00th=[ 396], 00:16:07.191 | 99.00th=[ 482], 99.50th=[ 519], 99.90th=[ 824], 99.95th=[ 7570], 00:16:07.191 | 99.99th=[ 7570] 00:16:07.191 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:07.191 slat (nsec): min=13018, max=90449, avg=27030.40, stdev=8022.15 00:16:07.191 clat (usec): min=112, max=2501, avg=205.13, stdev=74.90 00:16:07.191 lat (usec): min=136, max=2524, avg=232.16, stdev=75.60 00:16:07.191 clat percentiles (usec): 00:16:07.191 | 1.00th=[ 128], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 174], 00:16:07.191 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 208], 00:16:07.191 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 249], 95.00th=[ 265], 00:16:07.191 | 99.00th=[ 330], 99.50th=[ 379], 99.90th=[ 750], 99.95th=[ 1942], 00:16:07.191 | 99.99th=[ 2507] 00:16:07.191 bw ( KiB/s): min= 8192, max= 8192, per=28.07%, avg=8192.00, stdev= 0.00, samples=1 00:16:07.191 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:07.191 lat (usec) : 250=72.72%, 500=26.87%, 750=0.28%, 1000=0.05% 00:16:07.191 lat (msec) : 2=0.03%, 4=0.03%, 10=0.03% 00:16:07.191 cpu : usr=1.70%, sys=6.40%, ctx=3868, majf=0, minf=15 00:16:07.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.191 issued rwts: total=1819,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.191 job2: (groupid=0, jobs=1): err= 0: pid=87460: Sat Nov 16 16:35:44 2024 00:16:07.191 read: IOPS=1960, BW=7840KiB/s (8028kB/s)(7848KiB/1001msec) 00:16:07.191 slat (nsec): min=11682, max=59151, avg=15949.85, stdev=5606.08 00:16:07.191 clat (usec): min=141, max=2380, avg=247.14, stdev=81.11 00:16:07.191 lat (usec): min=154, max=2408, avg=263.09, stdev=81.54 00:16:07.191 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 159], 5.00th=[ 180], 10.00th=[ 194], 20.00th=[ 210], 00:16:07.192 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 251], 00:16:07.192 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 314], 00:16:07.192 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 1680], 99.95th=[ 2376], 00:16:07.192 | 99.99th=[ 2376] 00:16:07.192 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:07.192 slat (nsec): min=17599, max=77647, avg=24313.99, stdev=7774.50 00:16:07.192 clat (usec): min=115, max=389, avg=208.52, stdev=33.32 00:16:07.192 lat (usec): min=135, max=429, avg=232.83, stdev=34.95 00:16:07.192 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 182], 00:16:07.192 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 215], 00:16:07.192 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 251], 95.00th=[ 265], 00:16:07.192 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 367], 99.95th=[ 371], 00:16:07.192 | 
99.99th=[ 392] 00:16:07.192 bw ( KiB/s): min= 8192, max= 8192, per=28.07%, avg=8192.00, stdev= 0.00, samples=1 00:16:07.192 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:07.192 lat (usec) : 250=74.51%, 500=25.31%, 750=0.07% 00:16:07.192 lat (msec) : 2=0.07%, 4=0.02% 00:16:07.192 cpu : usr=1.30%, sys=6.20%, ctx=4010, majf=0, minf=10 00:16:07.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.192 issued rwts: total=1962,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.192 job3: (groupid=0, jobs=1): err= 0: pid=87461: Sat Nov 16 16:35:44 2024 00:16:07.192 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:07.192 slat (nsec): min=12906, max=68443, avg=16712.23, stdev=5830.22 00:16:07.192 clat (usec): min=161, max=547, avg=314.22, stdev=71.15 00:16:07.192 lat (usec): min=175, max=578, avg=330.93, stdev=71.95 00:16:07.192 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 229], 00:16:07.192 | 30.00th=[ 273], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 347], 00:16:07.192 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 420], 00:16:07.192 | 99.00th=[ 457], 99.50th=[ 469], 99.90th=[ 537], 99.95th=[ 545], 00:16:07.192 | 99.99th=[ 545] 00:16:07.192 write: IOPS=1669, BW=6677KiB/s (6838kB/s)(6684KiB/1001msec); 0 zone resets 00:16:07.192 slat (nsec): min=19062, max=97551, avg=35490.67, stdev=10422.47 00:16:07.192 clat (usec): min=117, max=751, avg=254.62, stdev=56.24 00:16:07.192 lat (usec): min=137, max=784, avg=290.11, stdev=58.41 00:16:07.192 clat percentiles (usec): 00:16:07.192 | 1.00th=[ 139], 5.00th=[ 157], 10.00th=[ 188], 20.00th=[ 219], 00:16:07.192 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 251], 60.00th=[ 262], 00:16:07.192 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 326], 95.00th=[ 371], 00:16:07.192 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 449], 99.95th=[ 750], 00:16:07.192 | 99.99th=[ 750] 00:16:07.192 bw ( KiB/s): min= 7984, max= 7984, per=27.36%, avg=7984.00, stdev= 0.00, samples=1 00:16:07.192 iops : min= 1996, max= 1996, avg=1996.00, stdev= 0.00, samples=1 00:16:07.192 lat (usec) : 250=37.82%, 500=62.05%, 750=0.09%, 1000=0.03% 00:16:07.192 cpu : usr=1.60%, sys=5.90%, ctx=3207, majf=0, minf=10 00:16:07.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.192 issued rwts: total=1536,1671,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.192 00:16:07.192 Run status group 0 (all jobs): 00:16:07.192 READ: bw=25.7MiB/s (26.9MB/s), 5055KiB/s-7840KiB/s (5176kB/s-8028kB/s), io=25.7MiB (27.0MB), run=1001-1001msec 00:16:07.192 WRITE: bw=28.5MiB/s (29.9MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.5MiB (29.9MB), run=1001-1001msec 00:16:07.192 00:16:07.192 Disk stats (read/write): 00:16:07.192 nvme0n1: ios=1073/1452, merge=0/0, ticks=429/401, in_queue=830, util=89.87% 00:16:07.192 nvme0n2: ios=1549/1953, merge=0/0, ticks=402/425, in_queue=827, util=88.01% 00:16:07.192 nvme0n3: ios=1557/1914, merge=0/0, ticks=444/424, in_queue=868, util=90.10% 00:16:07.192 
nvme0n4: ios=1092/1536, merge=0/0, ticks=391/419, in_queue=810, util=89.84% 00:16:07.192 16:35:44 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:07.192 [global] 00:16:07.192 thread=1 00:16:07.192 invalidate=1 00:16:07.192 rw=write 00:16:07.192 time_based=1 00:16:07.192 runtime=1 00:16:07.192 ioengine=libaio 00:16:07.192 direct=1 00:16:07.192 bs=4096 00:16:07.192 iodepth=128 00:16:07.192 norandommap=0 00:16:07.192 numjobs=1 00:16:07.192 00:16:07.192 verify_dump=1 00:16:07.192 verify_backlog=512 00:16:07.192 verify_state_save=0 00:16:07.192 do_verify=1 00:16:07.192 verify=crc32c-intel 00:16:07.192 [job0] 00:16:07.192 filename=/dev/nvme0n1 00:16:07.192 [job1] 00:16:07.192 filename=/dev/nvme0n2 00:16:07.192 [job2] 00:16:07.192 filename=/dev/nvme0n3 00:16:07.192 [job3] 00:16:07.192 filename=/dev/nvme0n4 00:16:07.192 Could not set queue depth (nvme0n1) 00:16:07.192 Could not set queue depth (nvme0n2) 00:16:07.192 Could not set queue depth (nvme0n3) 00:16:07.192 Could not set queue depth (nvme0n4) 00:16:07.192 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.192 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.192 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.192 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.192 fio-3.35 00:16:07.192 Starting 4 threads 00:16:08.572 00:16:08.572 job0: (groupid=0, jobs=1): err= 0: pid=87522: Sat Nov 16 16:35:45 2024 00:16:08.572 read: IOPS=2727, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1005msec) 00:16:08.572 slat (usec): min=3, max=10726, avg=159.70, stdev=859.76 00:16:08.572 clat (usec): min=2054, max=50448, avg=19896.24, stdev=4399.50 00:16:08.572 lat (usec): min=10859, max=50471, avg=20055.95, stdev=4469.13 00:16:08.572 clat percentiles (usec): 00:16:08.572 | 1.00th=[11469], 5.00th=[14877], 10.00th=[16450], 20.00th=[17957], 00:16:08.572 | 30.00th=[18220], 40.00th=[18482], 50.00th=[19006], 60.00th=[19268], 00:16:08.572 | 70.00th=[20055], 80.00th=[21627], 90.00th=[24511], 95.00th=[27919], 00:16:08.572 | 99.00th=[38536], 99.50th=[44827], 99.90th=[50594], 99.95th=[50594], 00:16:08.572 | 99.99th=[50594] 00:16:08.572 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:16:08.572 slat (usec): min=13, max=11371, avg=175.48, stdev=930.97 00:16:08.572 clat (usec): min=11664, max=64380, avg=23546.49, stdev=8974.54 00:16:08.572 lat (usec): min=11705, max=64424, avg=23721.97, stdev=9060.73 00:16:08.572 clat percentiles (usec): 00:16:08.572 | 1.00th=[13829], 5.00th=[16909], 10.00th=[18220], 20.00th=[19006], 00:16:08.572 | 30.00th=[19268], 40.00th=[19268], 50.00th=[19530], 60.00th=[20055], 00:16:08.572 | 70.00th=[22938], 80.00th=[25822], 90.00th=[36963], 95.00th=[44827], 00:16:08.572 | 99.00th=[55313], 99.50th=[57410], 99.90th=[64226], 99.95th=[64226], 00:16:08.572 | 99.99th=[64226] 00:16:08.572 bw ( KiB/s): min=10920, max=13656, per=20.75%, avg=12288.00, stdev=1934.64, samples=2 00:16:08.572 iops : min= 2730, max= 3414, avg=3072.00, stdev=483.66, samples=2 00:16:08.572 lat (msec) : 4=0.02%, 20=62.41%, 50=35.46%, 100=2.12% 00:16:08.572 cpu : usr=2.09%, sys=9.66%, ctx=275, majf=0, minf=9 00:16:08.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:08.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.572 issued rwts: total=2741,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.572 job1: (groupid=0, jobs=1): err= 0: pid=87523: Sat Nov 16 16:35:45 2024 00:16:08.572 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:16:08.572 slat (usec): min=9, max=4508, avg=100.51, stdev=518.88 00:16:08.572 clat (usec): min=9083, max=18565, avg=13367.56, stdev=1174.61 00:16:08.572 lat (usec): min=9098, max=19724, avg=13468.07, stdev=1194.57 00:16:08.572 clat percentiles (usec): 00:16:08.572 | 1.00th=[ 9896], 5.00th=[11338], 10.00th=[11994], 20.00th=[12649], 00:16:08.572 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13698], 00:16:08.572 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:16:08.572 | 99.00th=[16188], 99.50th=[16909], 99.90th=[18220], 99.95th=[18220], 00:16:08.572 | 99.99th=[18482] 00:16:08.572 write: IOPS=4667, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1003msec); 0 zone resets 00:16:08.572 slat (usec): min=11, max=4869, avg=106.80, stdev=548.56 00:16:08.572 clat (usec): min=2665, max=19506, avg=13879.81, stdev=2133.78 00:16:08.572 lat (usec): min=2688, max=19555, avg=13986.61, stdev=2098.45 00:16:08.572 clat percentiles (usec): 00:16:08.572 | 1.00th=[ 5997], 5.00th=[10421], 10.00th=[10945], 20.00th=[11994], 00:16:08.572 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:16:08.572 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16188], 95.00th=[16319], 00:16:08.572 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171], 00:16:08.572 | 99.99th=[19530] 00:16:08.572 bw ( KiB/s): min=18240, max=18661, per=31.15%, avg=18450.50, stdev=297.69, samples=2 00:16:08.572 iops : min= 4560, max= 4665, avg=4612.50, stdev=74.25, samples=2 00:16:08.572 lat (msec) : 4=0.45%, 10=1.55%, 20=98.00% 00:16:08.572 cpu : usr=4.19%, sys=13.27%, ctx=455, majf=0, minf=10 00:16:08.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:08.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.572 issued rwts: total=4608,4682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.572 job2: (groupid=0, jobs=1): err= 0: pid=87524: Sat Nov 16 16:35:45 2024 00:16:08.572 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:16:08.572 slat (usec): min=6, max=13845, avg=181.07, stdev=926.20 00:16:08.572 clat (usec): min=12036, max=83897, avg=22650.32, stdev=14259.05 00:16:08.572 lat (usec): min=13393, max=83929, avg=22831.38, stdev=14357.00 00:16:08.572 clat percentiles (usec): 00:16:08.572 | 1.00th=[13435], 5.00th=[14877], 10.00th=[15401], 20.00th=[15926], 00:16:08.572 | 30.00th=[16188], 40.00th=[16581], 50.00th=[16712], 60.00th=[17171], 00:16:08.572 | 70.00th=[17957], 80.00th=[22676], 90.00th=[37487], 95.00th=[57410], 00:16:08.572 | 99.00th=[80217], 99.50th=[80217], 99.90th=[84411], 99.95th=[84411], 00:16:08.572 | 99.99th=[84411] 00:16:08.572 write: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1004msec); 0 zone resets 00:16:08.572 slat (usec): min=11, max=12833, avg=169.96, stdev=918.07 00:16:08.572 clat (usec): min=718, max=69821, avg=22600.80, stdev=11271.25 00:16:08.573 lat (usec): min=10800, max=69847, avg=22770.76, stdev=11306.59 00:16:08.573 
clat percentiles (usec): 00:16:08.573 | 1.00th=[11731], 5.00th=[14353], 10.00th=[15139], 20.00th=[15795], 00:16:08.573 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18744], 60.00th=[19268], 00:16:08.573 | 70.00th=[19530], 80.00th=[27395], 90.00th=[38011], 95.00th=[51119], 00:16:08.573 | 99.00th=[61604], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:16:08.573 | 99.99th=[69731] 00:16:08.573 bw ( KiB/s): min= 6832, max=16416, per=19.63%, avg=11624.00, stdev=6776.91, samples=2 00:16:08.573 iops : min= 1708, max= 4104, avg=2906.00, stdev=1694.23, samples=2 00:16:08.573 lat (usec) : 750=0.02% 00:16:08.573 lat (msec) : 20=72.92%, 50=20.54%, 100=6.53% 00:16:08.573 cpu : usr=2.69%, sys=8.67%, ctx=339, majf=0, minf=15 00:16:08.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:08.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.573 issued rwts: total=2560,3030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.573 job3: (groupid=0, jobs=1): err= 0: pid=87525: Sat Nov 16 16:35:45 2024 00:16:08.573 read: IOPS=3632, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1004msec) 00:16:08.573 slat (usec): min=5, max=4221, avg=122.45, stdev=565.08 00:16:08.573 clat (usec): min=1129, max=20247, avg=16126.69, stdev=1974.01 00:16:08.573 lat (usec): min=4598, max=24003, avg=16249.14, stdev=1910.28 00:16:08.573 clat percentiles (usec): 00:16:08.573 | 1.00th=[ 8455], 5.00th=[13173], 10.00th=[14222], 20.00th=[15008], 00:16:08.573 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16319], 60.00th=[16909], 00:16:08.573 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18220], 95.00th=[18482], 00:16:08.573 | 99.00th=[19268], 99.50th=[19268], 99.90th=[20317], 99.95th=[20317], 00:16:08.573 | 99.99th=[20317] 00:16:08.573 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:16:08.573 slat (usec): min=12, max=4504, avg=127.56, stdev=553.35 00:16:08.573 clat (usec): min=8808, max=21437, avg=16530.94, stdev=2029.75 00:16:08.573 lat (usec): min=8841, max=21463, avg=16658.50, stdev=2008.58 00:16:08.573 clat percentiles (usec): 00:16:08.573 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13566], 20.00th=[14877], 00:16:08.573 | 30.00th=[15533], 40.00th=[16057], 50.00th=[16581], 60.00th=[17171], 00:16:08.573 | 70.00th=[17433], 80.00th=[18220], 90.00th=[19268], 95.00th=[19792], 00:16:08.573 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:16:08.573 | 99.99th=[21365] 00:16:08.573 bw ( KiB/s): min=16112, max=16168, per=27.25%, avg=16140.00, stdev=39.60, samples=2 00:16:08.573 iops : min= 4028, max= 4042, avg=4035.00, stdev= 9.90, samples=2 00:16:08.573 lat (msec) : 2=0.01%, 10=0.83%, 20=97.29%, 50=1.87% 00:16:08.573 cpu : usr=4.19%, sys=11.67%, ctx=535, majf=0, minf=19 00:16:08.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:08.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:08.573 issued rwts: total=3647,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:08.573 00:16:08.573 Run status group 0 (all jobs): 00:16:08.573 READ: bw=52.7MiB/s (55.2MB/s), 9.96MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=53.0MiB (55.5MB), run=1003-1005msec 00:16:08.573 WRITE: bw=57.8MiB/s (60.6MB/s), 
11.8MiB/s-18.2MiB/s (12.4MB/s-19.1MB/s), io=58.1MiB (60.9MB), run=1003-1005msec 00:16:08.573 00:16:08.573 Disk stats (read/write): 00:16:08.573 nvme0n1: ios=2610/2775, merge=0/0, ticks=23885/26222, in_queue=50107, util=87.88% 00:16:08.573 nvme0n2: ios=3807/4096, merge=0/0, ticks=15653/16316, in_queue=31969, util=88.46% 00:16:08.573 nvme0n3: ios=2432/2560, merge=0/0, ticks=13379/11508, in_queue=24887, util=88.52% 00:16:08.573 nvme0n4: ios=3072/3485, merge=0/0, ticks=11809/12823, in_queue=24632, util=89.77% 00:16:08.573 16:35:45 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:08.573 [global] 00:16:08.573 thread=1 00:16:08.573 invalidate=1 00:16:08.573 rw=randwrite 00:16:08.573 time_based=1 00:16:08.573 runtime=1 00:16:08.573 ioengine=libaio 00:16:08.573 direct=1 00:16:08.573 bs=4096 00:16:08.573 iodepth=128 00:16:08.573 norandommap=0 00:16:08.573 numjobs=1 00:16:08.573 00:16:08.573 verify_dump=1 00:16:08.573 verify_backlog=512 00:16:08.573 verify_state_save=0 00:16:08.573 do_verify=1 00:16:08.573 verify=crc32c-intel 00:16:08.573 [job0] 00:16:08.573 filename=/dev/nvme0n1 00:16:08.573 [job1] 00:16:08.573 filename=/dev/nvme0n2 00:16:08.573 [job2] 00:16:08.573 filename=/dev/nvme0n3 00:16:08.573 [job3] 00:16:08.573 filename=/dev/nvme0n4 00:16:08.573 Could not set queue depth (nvme0n1) 00:16:08.573 Could not set queue depth (nvme0n2) 00:16:08.573 Could not set queue depth (nvme0n3) 00:16:08.573 Could not set queue depth (nvme0n4) 00:16:08.573 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:08.573 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:08.573 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:08.573 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:08.573 fio-3.35 00:16:08.573 Starting 4 threads 00:16:09.950 00:16:09.951 job0: (groupid=0, jobs=1): err= 0: pid=87578: Sat Nov 16 16:35:47 2024 00:16:09.951 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:16:09.951 slat (usec): min=3, max=16320, avg=150.47, stdev=859.71 00:16:09.951 clat (usec): min=8411, max=37109, avg=19686.01, stdev=3166.36 00:16:09.951 lat (usec): min=8422, max=37121, avg=19836.48, stdev=3242.75 00:16:09.951 clat percentiles (usec): 00:16:09.951 | 1.00th=[13829], 5.00th=[15401], 10.00th=[17433], 20.00th=[17957], 00:16:09.951 | 30.00th=[18220], 40.00th=[18744], 50.00th=[18744], 60.00th=[19530], 00:16:09.951 | 70.00th=[20317], 80.00th=[21365], 90.00th=[23200], 95.00th=[24511], 00:16:09.951 | 99.00th=[32113], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:16:09.951 | 99.99th=[36963] 00:16:09.951 write: IOPS=3486, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1009msec); 0 zone resets 00:16:09.951 slat (usec): min=5, max=18228, avg=146.63, stdev=992.25 00:16:09.951 clat (usec): min=5540, max=39205, avg=19146.03, stdev=3222.95 00:16:09.951 lat (usec): min=6107, max=39250, avg=19292.66, stdev=3338.93 00:16:09.951 clat percentiles (usec): 00:16:09.951 | 1.00th=[ 8160], 5.00th=[13698], 10.00th=[16712], 20.00th=[17695], 00:16:09.951 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19268], 60.00th=[19530], 00:16:09.951 | 70.00th=[20055], 80.00th=[20841], 90.00th=[22152], 95.00th=[22938], 00:16:09.951 | 99.00th=[30278], 99.50th=[30278], 99.90th=[36963], 99.95th=[36963], 00:16:09.951 | 
99.99th=[39060] 00:16:09.951 bw ( KiB/s): min=13122, max=14024, per=19.20%, avg=13573.00, stdev=637.81, samples=2 00:16:09.951 iops : min= 3280, max= 3506, avg=3393.00, stdev=159.81, samples=2 00:16:09.951 lat (msec) : 10=1.43%, 20=65.34%, 50=33.23% 00:16:09.951 cpu : usr=2.58%, sys=8.63%, ctx=476, majf=0, minf=11 00:16:09.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:09.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.951 issued rwts: total=3072,3518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.951 job1: (groupid=0, jobs=1): err= 0: pid=87579: Sat Nov 16 16:35:47 2024 00:16:09.951 read: IOPS=3616, BW=14.1MiB/s (14.8MB/s)(14.3MiB/1010msec) 00:16:09.951 slat (usec): min=3, max=11754, avg=130.84, stdev=739.31 00:16:09.951 clat (usec): min=2613, max=27997, avg=16501.35, stdev=3678.82 00:16:09.951 lat (usec): min=5059, max=28027, avg=16632.19, stdev=3728.82 00:16:09.951 clat percentiles (usec): 00:16:09.951 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[11731], 20.00th=[12518], 00:16:09.951 | 30.00th=[14615], 40.00th=[15926], 50.00th=[17695], 60.00th=[18220], 00:16:09.951 | 70.00th=[18744], 80.00th=[19268], 90.00th=[20055], 95.00th=[22152], 00:16:09.951 | 99.00th=[24773], 99.50th=[24773], 99.90th=[26608], 99.95th=[27395], 00:16:09.951 | 99.99th=[27919] 00:16:09.951 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:16:09.951 slat (usec): min=4, max=12881, avg=122.64, stdev=836.35 00:16:09.951 clat (usec): min=3441, max=27634, avg=16505.82, stdev=3616.26 00:16:09.951 lat (usec): min=3456, max=31537, avg=16628.46, stdev=3696.81 00:16:09.951 clat percentiles (usec): 00:16:09.951 | 1.00th=[ 5800], 5.00th=[11207], 10.00th=[12911], 20.00th=[13960], 00:16:09.951 | 30.00th=[14353], 40.00th=[14877], 50.00th=[16057], 60.00th=[17957], 00:16:09.951 | 70.00th=[18744], 80.00th=[19530], 90.00th=[20579], 95.00th=[22414], 00:16:09.951 | 99.00th=[23987], 99.50th=[25560], 99.90th=[27395], 99.95th=[27395], 00:16:09.951 | 99.99th=[27657] 00:16:09.951 bw ( KiB/s): min=14672, max=17624, per=22.84%, avg=16148.00, stdev=2087.38, samples=2 00:16:09.951 iops : min= 3668, max= 4406, avg=4037.00, stdev=521.84, samples=2 00:16:09.951 lat (msec) : 4=0.17%, 10=3.01%, 20=83.17%, 50=13.65% 00:16:09.951 cpu : usr=4.16%, sys=8.13%, ctx=587, majf=0, minf=15 00:16:09.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:09.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.951 issued rwts: total=3653,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.951 job2: (groupid=0, jobs=1): err= 0: pid=87580: Sat Nov 16 16:35:47 2024 00:16:09.951 read: IOPS=4607, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1007msec) 00:16:09.951 slat (usec): min=4, max=14546, avg=100.00, stdev=702.04 00:16:09.951 clat (usec): min=5347, max=31743, avg=13321.95, stdev=4022.73 00:16:09.951 lat (usec): min=5361, max=31760, avg=13421.95, stdev=4066.00 00:16:09.951 clat percentiles (usec): 00:16:09.951 | 1.00th=[ 6587], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10290], 00:16:09.951 | 30.00th=[10945], 40.00th=[11338], 50.00th=[12256], 60.00th=[13173], 00:16:09.951 | 70.00th=[14353], 80.00th=[16909], 
90.00th=[19006], 95.00th=[20317], 00:16:09.951 | 99.00th=[26870], 99.50th=[29230], 99.90th=[31589], 99.95th=[31589], 00:16:09.951 | 99.99th=[31851] 00:16:09.951 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:16:09.951 slat (usec): min=5, max=15833, avg=97.13, stdev=707.70 00:16:09.951 clat (usec): min=3550, max=34632, avg=12840.95, stdev=3939.82 00:16:09.951 lat (usec): min=3579, max=34689, avg=12938.08, stdev=4020.00 00:16:09.951 clat percentiles (usec): 00:16:09.951 | 1.00th=[ 5014], 5.00th=[ 6915], 10.00th=[ 8225], 20.00th=[10159], 00:16:09.951 | 30.00th=[10945], 40.00th=[11731], 50.00th=[12125], 60.00th=[12387], 00:16:09.951 | 70.00th=[12780], 80.00th=[17433], 90.00th=[19268], 95.00th=[19792], 00:16:09.951 | 99.00th=[20317], 99.50th=[20841], 99.90th=[32900], 99.95th=[34341], 00:16:09.951 | 99.99th=[34866] 00:16:09.951 bw ( KiB/s): min=16464, max=23768, per=28.45%, avg=20116.00, stdev=5164.71, samples=2 00:16:09.951 iops : min= 4116, max= 5942, avg=5029.00, stdev=1291.18, samples=2 00:16:09.951 lat (msec) : 4=0.04%, 10=17.59%, 20=78.48%, 50=3.88% 00:16:09.951 cpu : usr=3.88%, sys=13.52%, ctx=472, majf=0, minf=12 00:16:09.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:09.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.951 issued rwts: total=4640,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.951 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.951 job3: (groupid=0, jobs=1): err= 0: pid=87581: Sat Nov 16 16:35:47 2024 00:16:09.951 read: IOPS=4683, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1007msec) 00:16:09.951 slat (usec): min=4, max=14778, avg=102.22, stdev=719.90 00:16:09.951 clat (usec): min=3453, max=32806, avg=13286.36, stdev=4237.64 00:16:09.951 lat (usec): min=3465, max=32821, avg=13388.58, stdev=4280.41 00:16:09.951 clat percentiles (usec): 00:16:09.952 | 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10290], 00:16:09.952 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11731], 60.00th=[12780], 00:16:09.952 | 70.00th=[14091], 80.00th=[16909], 90.00th=[18482], 95.00th=[22152], 00:16:09.952 | 99.00th=[29230], 99.50th=[31065], 99.90th=[32637], 99.95th=[32900], 00:16:09.952 | 99.99th=[32900] 00:16:09.952 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:16:09.952 slat (usec): min=5, max=15925, avg=94.25, stdev=654.86 00:16:09.952 clat (usec): min=2941, max=34909, avg=12676.12, stdev=3983.21 00:16:09.952 lat (usec): min=2962, max=34964, avg=12770.36, stdev=4053.33 00:16:09.952 clat percentiles (usec): 00:16:09.952 | 1.00th=[ 4490], 5.00th=[ 7111], 10.00th=[ 8717], 20.00th=[10290], 00:16:09.952 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:16:09.952 | 70.00th=[12649], 80.00th=[17433], 90.00th=[19006], 95.00th=[19530], 00:16:09.952 | 99.00th=[20317], 99.50th=[20579], 99.90th=[32637], 99.95th=[32637], 00:16:09.952 | 99.99th=[34866] 00:16:09.952 bw ( KiB/s): min=16416, max=24416, per=28.87%, avg=20416.00, stdev=5656.85, samples=2 00:16:09.952 iops : min= 4104, max= 6104, avg=5104.00, stdev=1414.21, samples=2 00:16:09.952 lat (msec) : 4=0.54%, 10=16.60%, 20=78.36%, 50=4.50% 00:16:09.952 cpu : usr=5.37%, sys=11.63%, ctx=471, majf=0, minf=7 00:16:09.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:09.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.952 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.952 issued rwts: total=4716,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.952 00:16:09.952 Run status group 0 (all jobs): 00:16:09.952 READ: bw=62.2MiB/s (65.2MB/s), 11.9MiB/s-18.3MiB/s (12.5MB/s-19.2MB/s), io=62.8MiB (65.9MB), run=1007-1010msec 00:16:09.952 WRITE: bw=69.1MiB/s (72.4MB/s), 13.6MiB/s-19.9MiB/s (14.3MB/s-20.8MB/s), io=69.7MiB (73.1MB), run=1007-1010msec 00:16:09.952 00:16:09.952 Disk stats (read/write): 00:16:09.952 nvme0n1: ios=2610/3031, merge=0/0, ticks=33321/39422, in_queue=72743, util=86.87% 00:16:09.952 nvme0n2: ios=3178/3584, merge=0/0, ticks=33965/37569, in_queue=71534, util=88.57% 00:16:09.952 nvme0n3: ios=3986/4096, merge=0/0, ticks=50533/51571, in_queue=102104, util=88.87% 00:16:09.952 nvme0n4: ios=3996/4096, merge=0/0, ticks=50958/51548, in_queue=102506, util=89.59% 00:16:09.952 16:35:47 -- target/fio.sh@55 -- # sync 00:16:09.952 16:35:47 -- target/fio.sh@59 -- # fio_pid=87594 00:16:09.952 16:35:47 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:09.952 16:35:47 -- target/fio.sh@61 -- # sleep 3 00:16:09.952 [global] 00:16:09.952 thread=1 00:16:09.952 invalidate=1 00:16:09.952 rw=read 00:16:09.952 time_based=1 00:16:09.952 runtime=10 00:16:09.952 ioengine=libaio 00:16:09.952 direct=1 00:16:09.952 bs=4096 00:16:09.952 iodepth=1 00:16:09.952 norandommap=1 00:16:09.952 numjobs=1 00:16:09.952 00:16:09.952 [job0] 00:16:09.952 filename=/dev/nvme0n1 00:16:09.952 [job1] 00:16:09.952 filename=/dev/nvme0n2 00:16:09.952 [job2] 00:16:09.952 filename=/dev/nvme0n3 00:16:09.952 [job3] 00:16:09.952 filename=/dev/nvme0n4 00:16:09.952 Could not set queue depth (nvme0n1) 00:16:09.952 Could not set queue depth (nvme0n2) 00:16:09.952 Could not set queue depth (nvme0n3) 00:16:09.952 Could not set queue depth (nvme0n4) 00:16:10.211 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.211 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.211 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.211 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:10.211 fio-3.35 00:16:10.211 Starting 4 threads 00:16:13.541 16:35:50 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:13.541 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45387776, buflen=4096 00:16:13.541 fio: pid=87643, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:13.541 16:35:50 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:13.541 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=38584320, buflen=4096 00:16:13.541 fio: pid=87642, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:13.541 16:35:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:13.541 16:35:50 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:13.800 fio: pid=87640, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:13.800 fio: io_u error on file /dev/nvme0n1: Operation not supported: read 
offset=54108160, buflen=4096 00:16:13.800 16:35:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:13.800 16:35:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:14.059 fio: pid=87641, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:14.059 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=60338176, buflen=4096 00:16:14.059 00:16:14.059 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87640: Sat Nov 16 16:35:51 2024 00:16:14.059 read: IOPS=3877, BW=15.1MiB/s (15.9MB/s)(51.6MiB/3407msec) 00:16:14.059 slat (usec): min=6, max=10340, avg=17.01, stdev=155.38 00:16:14.059 clat (usec): min=76, max=3431, avg=239.52, stdev=83.98 00:16:14.059 lat (usec): min=130, max=10611, avg=256.52, stdev=176.95 00:16:14.059 clat percentiles (usec): 00:16:14.060 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 172], 00:16:14.060 | 30.00th=[ 188], 40.00th=[ 206], 50.00th=[ 227], 60.00th=[ 262], 00:16:14.060 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 347], 00:16:14.060 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 523], 99.95th=[ 750], 00:16:14.060 | 99.99th=[ 3261] 00:16:14.060 bw ( KiB/s): min=12440, max=20192, per=29.73%, avg=15733.33, stdev=3697.94, samples=6 00:16:14.060 iops : min= 3110, max= 5048, avg=3933.33, stdev=924.48, samples=6 00:16:14.060 lat (usec) : 100=0.01%, 250=56.35%, 500=43.51%, 750=0.08%, 1000=0.02% 00:16:14.060 lat (msec) : 4=0.03% 00:16:14.060 cpu : usr=0.82%, sys=4.76%, ctx=13265, majf=0, minf=1 00:16:14.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.060 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.060 issued rwts: total=13211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.060 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87641: Sat Nov 16 16:35:51 2024 00:16:14.060 read: IOPS=4024, BW=15.7MiB/s (16.5MB/s)(57.5MiB/3661msec) 00:16:14.060 slat (usec): min=6, max=9332, avg=18.81, stdev=163.59 00:16:14.060 clat (usec): min=100, max=32271, avg=228.25, stdev=273.68 00:16:14.060 lat (usec): min=133, max=32286, avg=247.06, stdev=318.69 00:16:14.060 clat percentiles (usec): 00:16:14.060 | 1.00th=[ 133], 5.00th=[ 155], 10.00th=[ 184], 20.00th=[ 202], 00:16:14.060 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:16:14.060 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 285], 00:16:14.060 | 99.00th=[ 392], 99.50th=[ 437], 99.90th=[ 701], 99.95th=[ 1614], 00:16:14.060 | 99.99th=[ 4178] 00:16:14.060 bw ( KiB/s): min=14912, max=16794, per=30.31%, avg=16044.86, stdev=708.52, samples=7 00:16:14.060 iops : min= 3728, max= 4198, avg=4011.14, stdev=177.04, samples=7 00:16:14.060 lat (usec) : 250=82.70%, 500=17.00%, 750=0.20%, 1000=0.03% 00:16:14.060 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01%, 50=0.01% 00:16:14.060 cpu : usr=0.85%, sys=5.08%, ctx=14769, majf=0, minf=2 00:16:14.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.060 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.060 issued rwts: 
total=14732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.060 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87642: Sat Nov 16 16:35:51 2024 00:16:14.060 read: IOPS=2962, BW=11.6MiB/s (12.1MB/s)(36.8MiB/3180msec) 00:16:14.060 slat (usec): min=6, max=14683, avg=15.98, stdev=168.18 00:16:14.060 clat (usec): min=139, max=18303, avg=320.17, stdev=197.07 00:16:14.060 lat (usec): min=151, max=18314, avg=336.14, stdev=258.59 00:16:14.060 clat percentiles (usec): 00:16:14.060 | 1.00th=[ 208], 5.00th=[ 251], 10.00th=[ 265], 20.00th=[ 281], 00:16:14.060 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:16:14.060 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 383], 00:16:14.060 | 99.00th=[ 437], 99.50th=[ 457], 99.90th=[ 611], 99.95th=[ 947], 00:16:14.060 | 99.99th=[18220] 00:16:14.060 bw ( KiB/s): min=10952, max=12680, per=22.37%, avg=11842.67, stdev=724.58, samples=6 00:16:14.060 iops : min= 2738, max= 3170, avg=2960.67, stdev=181.15, samples=6 00:16:14.060 lat (usec) : 250=4.83%, 500=94.94%, 750=0.17%, 1000=0.01% 00:16:14.060 lat (msec) : 4=0.03%, 20=0.01% 00:16:14.060 cpu : usr=0.75%, sys=3.46%, ctx=9450, majf=0, minf=2 00:16:14.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.060 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.060 issued rwts: total=9421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.060 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87643: Sat Nov 16 16:35:51 2024 00:16:14.060 read: IOPS=3786, BW=14.8MiB/s (15.5MB/s)(43.3MiB/2927msec) 00:16:14.060 slat (nsec): min=9971, max=83069, avg=14818.34, stdev=4840.72 00:16:14.060 clat (usec): min=132, max=18581, avg=247.87, stdev=188.93 00:16:14.060 lat (usec): min=152, max=18595, avg=262.69, stdev=188.60 00:16:14.060 clat percentiles (usec): 00:16:14.060 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 188], 00:16:14.060 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 221], 60.00th=[ 235], 00:16:14.060 | 70.00th=[ 285], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 367], 00:16:14.060 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 482], 99.95th=[ 523], 00:16:14.060 | 99.99th=[ 2147] 00:16:14.060 bw ( KiB/s): min=11208, max=17880, per=27.73%, avg=14675.20, stdev=3037.20, samples=5 00:16:14.060 iops : min= 2802, max= 4470, avg=3668.80, stdev=759.30, samples=5 00:16:14.060 lat (usec) : 250=65.54%, 500=34.38%, 750=0.05% 00:16:14.060 lat (msec) : 4=0.02%, 20=0.01% 00:16:14.060 cpu : usr=0.92%, sys=4.58%, ctx=11084, majf=0, minf=2 00:16:14.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.060 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.060 issued rwts: total=11082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.060 00:16:14.060 Run status group 0 (all jobs): 00:16:14.060 READ: bw=51.7MiB/s (54.2MB/s), 11.6MiB/s-15.7MiB/s (12.1MB/s-16.5MB/s), io=189MiB (198MB), run=2927-3661msec 00:16:14.060 00:16:14.060 Disk stats (read/write): 00:16:14.060 nvme0n1: ios=13060/0, merge=0/0, 
ticks=3136/0, in_queue=3136, util=95.39% 00:16:14.060 nvme0n2: ios=14518/0, merge=0/0, ticks=3367/0, in_queue=3367, util=95.58% 00:16:14.060 nvme0n3: ios=9220/0, merge=0/0, ticks=2948/0, in_queue=2948, util=96.06% 00:16:14.060 nvme0n4: ios=10830/0, merge=0/0, ticks=2733/0, in_queue=2733, util=96.76% 00:16:14.060 16:35:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.060 16:35:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:14.319 16:35:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.319 16:35:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:14.577 16:35:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.577 16:35:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:14.835 16:35:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:14.835 16:35:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:15.094 16:35:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:15.094 16:35:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:15.353 16:35:52 -- target/fio.sh@69 -- # fio_status=0 00:16:15.353 16:35:52 -- target/fio.sh@70 -- # wait 87594 00:16:15.353 16:35:52 -- target/fio.sh@70 -- # fio_status=4 00:16:15.353 16:35:52 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.353 16:35:52 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.353 16:35:52 -- common/autotest_common.sh@1208 -- # local i=0 00:16:15.353 16:35:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:15.353 16:35:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.353 16:35:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.353 16:35:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:15.353 16:35:52 -- common/autotest_common.sh@1220 -- # return 0 00:16:15.353 16:35:52 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:15.353 nvmf hotplug test: fio failed as expected 00:16:15.353 16:35:52 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:15.353 16:35:52 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:15.612 16:35:52 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:15.612 16:35:52 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:15.612 16:35:52 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:15.612 16:35:52 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:15.612 16:35:52 -- target/fio.sh@91 -- # nvmftestfini 00:16:15.612 16:35:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:15.612 16:35:52 -- nvmf/common.sh@116 -- # sync 00:16:15.612 16:35:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:15.612 16:35:52 -- nvmf/common.sh@119 -- # set +e 00:16:15.612 16:35:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:15.612 16:35:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:15.612 rmmod 
nvme_tcp 00:16:15.612 rmmod nvme_fabrics 00:16:15.612 rmmod nvme_keyring 00:16:15.612 16:35:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:15.612 16:35:52 -- nvmf/common.sh@123 -- # set -e 00:16:15.612 16:35:52 -- nvmf/common.sh@124 -- # return 0 00:16:15.612 16:35:52 -- nvmf/common.sh@477 -- # '[' -n 87106 ']' 00:16:15.612 16:35:52 -- nvmf/common.sh@478 -- # killprocess 87106 00:16:15.612 16:35:52 -- common/autotest_common.sh@936 -- # '[' -z 87106 ']' 00:16:15.612 16:35:52 -- common/autotest_common.sh@940 -- # kill -0 87106 00:16:15.612 16:35:52 -- common/autotest_common.sh@941 -- # uname 00:16:15.612 16:35:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:15.612 16:35:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87106 00:16:15.612 16:35:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:15.612 killing process with pid 87106 00:16:15.612 16:35:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:15.612 16:35:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87106' 00:16:15.612 16:35:53 -- common/autotest_common.sh@955 -- # kill 87106 00:16:15.612 16:35:53 -- common/autotest_common.sh@960 -- # wait 87106 00:16:15.870 16:35:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:15.870 16:35:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:15.871 16:35:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:15.871 16:35:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.871 16:35:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:15.871 16:35:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.871 16:35:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.871 16:35:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.871 16:35:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:15.871 ************************************ 00:16:15.871 END TEST nvmf_fio_target 00:16:15.871 ************************************ 00:16:15.871 00:16:15.871 real 0m19.363s 00:16:15.871 user 1m13.470s 00:16:15.871 sys 0m8.566s 00:16:15.871 16:35:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:15.871 16:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:16.130 16:35:53 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:16.130 16:35:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:16.130 16:35:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.130 16:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:16.130 ************************************ 00:16:16.130 START TEST nvmf_bdevio 00:16:16.130 ************************************ 00:16:16.130 16:35:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:16.130 * Looking for test storage... 
00:16:16.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:16.130 16:35:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:16.130 16:35:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:16.130 16:35:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:16.130 16:35:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:16.130 16:35:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:16.130 16:35:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:16.130 16:35:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:16.130 16:35:53 -- scripts/common.sh@335 -- # IFS=.-: 00:16:16.130 16:35:53 -- scripts/common.sh@335 -- # read -ra ver1 00:16:16.130 16:35:53 -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.130 16:35:53 -- scripts/common.sh@336 -- # read -ra ver2 00:16:16.130 16:35:53 -- scripts/common.sh@337 -- # local 'op=<' 00:16:16.130 16:35:53 -- scripts/common.sh@339 -- # ver1_l=2 00:16:16.130 16:35:53 -- scripts/common.sh@340 -- # ver2_l=1 00:16:16.130 16:35:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:16.130 16:35:53 -- scripts/common.sh@343 -- # case "$op" in 00:16:16.130 16:35:53 -- scripts/common.sh@344 -- # : 1 00:16:16.130 16:35:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:16.130 16:35:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:16.130 16:35:53 -- scripts/common.sh@364 -- # decimal 1 00:16:16.130 16:35:53 -- scripts/common.sh@352 -- # local d=1 00:16:16.130 16:35:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.130 16:35:53 -- scripts/common.sh@354 -- # echo 1 00:16:16.130 16:35:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:16.130 16:35:53 -- scripts/common.sh@365 -- # decimal 2 00:16:16.130 16:35:53 -- scripts/common.sh@352 -- # local d=2 00:16:16.130 16:35:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.130 16:35:53 -- scripts/common.sh@354 -- # echo 2 00:16:16.130 16:35:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:16.130 16:35:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:16.131 16:35:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:16.131 16:35:53 -- scripts/common.sh@367 -- # return 0 00:16:16.131 16:35:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.131 16:35:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:16.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.131 --rc genhtml_branch_coverage=1 00:16:16.131 --rc genhtml_function_coverage=1 00:16:16.131 --rc genhtml_legend=1 00:16:16.131 --rc geninfo_all_blocks=1 00:16:16.131 --rc geninfo_unexecuted_blocks=1 00:16:16.131 00:16:16.131 ' 00:16:16.131 16:35:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:16.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.131 --rc genhtml_branch_coverage=1 00:16:16.131 --rc genhtml_function_coverage=1 00:16:16.131 --rc genhtml_legend=1 00:16:16.131 --rc geninfo_all_blocks=1 00:16:16.131 --rc geninfo_unexecuted_blocks=1 00:16:16.131 00:16:16.131 ' 00:16:16.131 16:35:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:16.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.131 --rc genhtml_branch_coverage=1 00:16:16.131 --rc genhtml_function_coverage=1 00:16:16.131 --rc genhtml_legend=1 00:16:16.131 --rc geninfo_all_blocks=1 00:16:16.131 --rc geninfo_unexecuted_blocks=1 00:16:16.131 00:16:16.131 ' 00:16:16.131 
16:35:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:16.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.131 --rc genhtml_branch_coverage=1 00:16:16.131 --rc genhtml_function_coverage=1 00:16:16.131 --rc genhtml_legend=1 00:16:16.131 --rc geninfo_all_blocks=1 00:16:16.131 --rc geninfo_unexecuted_blocks=1 00:16:16.131 00:16:16.131 ' 00:16:16.131 16:35:53 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.131 16:35:53 -- nvmf/common.sh@7 -- # uname -s 00:16:16.131 16:35:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.131 16:35:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.131 16:35:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.131 16:35:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.131 16:35:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.131 16:35:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.131 16:35:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.131 16:35:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.131 16:35:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.131 16:35:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.131 16:35:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:16:16.131 16:35:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:16:16.131 16:35:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.131 16:35:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.131 16:35:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.131 16:35:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.131 16:35:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.131 16:35:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.131 16:35:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.131 16:35:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.131 16:35:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.131 16:35:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.131 16:35:53 -- paths/export.sh@5 -- # export PATH 00:16:16.131 16:35:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.131 16:35:53 -- nvmf/common.sh@46 -- # : 0 00:16:16.131 16:35:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:16.131 16:35:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:16.131 16:35:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:16.131 16:35:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.131 16:35:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.131 16:35:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:16.131 16:35:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:16.131 16:35:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:16.131 16:35:53 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:16.131 16:35:53 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:16.131 16:35:53 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:16.131 16:35:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:16.131 16:35:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.131 16:35:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:16.131 16:35:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:16.131 16:35:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:16.131 16:35:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.131 16:35:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.131 16:35:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.131 16:35:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:16.131 16:35:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:16.131 16:35:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:16.131 16:35:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:16.131 16:35:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:16.131 16:35:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:16.131 16:35:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.131 16:35:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.131 16:35:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:16.131 16:35:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:16.131 16:35:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.131 16:35:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.131 16:35:53 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.131 16:35:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.131 16:35:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.131 16:35:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.131 16:35:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.131 16:35:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.131 16:35:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:16.131 16:35:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:16.131 Cannot find device "nvmf_tgt_br" 00:16:16.131 16:35:53 -- nvmf/common.sh@154 -- # true 00:16:16.131 16:35:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.131 Cannot find device "nvmf_tgt_br2" 00:16:16.131 16:35:53 -- nvmf/common.sh@155 -- # true 00:16:16.131 16:35:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:16.131 16:35:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:16.131 Cannot find device "nvmf_tgt_br" 00:16:16.131 16:35:53 -- nvmf/common.sh@157 -- # true 00:16:16.131 16:35:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:16.390 Cannot find device "nvmf_tgt_br2" 00:16:16.390 16:35:53 -- nvmf/common.sh@158 -- # true 00:16:16.390 16:35:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:16.390 16:35:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:16.390 16:35:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.390 16:35:53 -- nvmf/common.sh@161 -- # true 00:16:16.390 16:35:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.390 16:35:53 -- nvmf/common.sh@162 -- # true 00:16:16.390 16:35:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.390 16:35:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.390 16:35:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.390 16:35:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.390 16:35:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.390 16:35:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.390 16:35:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.390 16:35:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.390 16:35:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.390 16:35:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:16.390 16:35:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:16.390 16:35:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:16.390 16:35:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:16.390 16:35:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.390 16:35:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.390 16:35:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:16.390 16:35:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:16.390 16:35:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:16.390 16:35:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.390 16:35:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.390 16:35:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.390 16:35:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.390 16:35:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.390 16:35:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:16.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:16.390 00:16:16.390 --- 10.0.0.2 ping statistics --- 00:16:16.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.390 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:16.390 16:35:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:16.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:16.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:16.390 00:16:16.390 --- 10.0.0.3 ping statistics --- 00:16:16.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.390 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:16.390 16:35:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:16.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:16.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:16.649 00:16:16.649 --- 10.0.0.1 ping statistics --- 00:16:16.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.649 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:16.649 16:35:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.649 16:35:53 -- nvmf/common.sh@421 -- # return 0 00:16:16.649 16:35:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:16.649 16:35:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.649 16:35:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:16.649 16:35:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:16.649 16:35:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.649 16:35:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:16.649 16:35:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:16.649 16:35:53 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:16.649 16:35:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:16.649 16:35:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:16.649 16:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:16.649 16:35:53 -- nvmf/common.sh@469 -- # nvmfpid=87976 00:16:16.649 16:35:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:16.649 16:35:53 -- nvmf/common.sh@470 -- # waitforlisten 87976 00:16:16.649 16:35:53 -- common/autotest_common.sh@829 -- # '[' -z 87976 ']' 00:16:16.649 16:35:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.649 16:35:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
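The nvmf_veth_init sequence traced above builds a small three-endpoint topology: one veth pair for the initiator left in the root namespace, two target pairs moved into nvmf_tgt_ns_spdk, and a bridge (nvmf_br) joining the root-side peers, with the ping checks confirming that 10.0.0.1, 10.0.0.2, and 10.0.0.3 can all reach each other. A condensed sketch of the same steps, names and addresses taken from the trace (run as root):

    # namespace plus three veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # target-side endpoints go into the namespace; addresses as in the trace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the root-side peers together
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # open the NVMe/TCP port and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target process is then launched inside the namespace, while the initiator side of the test stays in the root namespace and reaches it at 10.0.0.2:4420 through the bridge.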
00:16:16.649 16:35:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.649 16:35:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.649 16:35:53 -- common/autotest_common.sh@10 -- # set +x 00:16:16.649 [2024-11-16 16:35:53.944305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:16.649 [2024-11-16 16:35:53.944386] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.649 [2024-11-16 16:35:54.078926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.907 [2024-11-16 16:35:54.154270] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.907 [2024-11-16 16:35:54.154454] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.907 [2024-11-16 16:35:54.154469] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.907 [2024-11-16 16:35:54.154477] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.907 [2024-11-16 16:35:54.154659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:16.907 [2024-11-16 16:35:54.155328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:16.907 [2024-11-16 16:35:54.155474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:16.907 [2024-11-16 16:35:54.155497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.840 16:35:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.840 16:35:54 -- common/autotest_common.sh@862 -- # return 0 00:16:17.841 16:35:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.841 16:35:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.841 16:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 16:35:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.841 16:35:55 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.841 16:35:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.841 16:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 [2024-11-16 16:35:55.049913] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.841 16:35:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.841 16:35:55 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:17.841 16:35:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.841 16:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 Malloc0 00:16:17.841 16:35:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.841 16:35:55 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:17.841 16:35:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.841 16:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 16:35:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.841 16:35:55 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.841 16:35:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.841 
16:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 16:35:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.841 16:35:55 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.841 16:35:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.841 16:35:55 -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 [2024-11-16 16:35:55.142730] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.841 16:35:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.841 16:35:55 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:17.841 16:35:55 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:17.841 16:35:55 -- nvmf/common.sh@520 -- # config=() 00:16:17.841 16:35:55 -- nvmf/common.sh@520 -- # local subsystem config 00:16:17.841 16:35:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:17.841 16:35:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:17.841 { 00:16:17.841 "params": { 00:16:17.841 "name": "Nvme$subsystem", 00:16:17.841 "trtype": "$TEST_TRANSPORT", 00:16:17.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.841 "adrfam": "ipv4", 00:16:17.841 "trsvcid": "$NVMF_PORT", 00:16:17.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.841 "hdgst": ${hdgst:-false}, 00:16:17.841 "ddgst": ${ddgst:-false} 00:16:17.841 }, 00:16:17.841 "method": "bdev_nvme_attach_controller" 00:16:17.841 } 00:16:17.841 EOF 00:16:17.841 )") 00:16:17.841 16:35:55 -- nvmf/common.sh@542 -- # cat 00:16:17.841 16:35:55 -- nvmf/common.sh@544 -- # jq . 00:16:17.841 16:35:55 -- nvmf/common.sh@545 -- # IFS=, 00:16:17.841 16:35:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:17.841 "params": { 00:16:17.841 "name": "Nvme1", 00:16:17.841 "trtype": "tcp", 00:16:17.841 "traddr": "10.0.0.2", 00:16:17.841 "adrfam": "ipv4", 00:16:17.841 "trsvcid": "4420", 00:16:17.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.841 "hdgst": false, 00:16:17.841 "ddgst": false 00:16:17.841 }, 00:16:17.841 "method": "bdev_nvme_attach_controller" 00:16:17.841 }' 00:16:17.841 [2024-11-16 16:35:55.196347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:17.841 [2024-11-16 16:35:55.196415] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88032 ] 00:16:18.098 [2024-11-16 16:35:55.334940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:18.098 [2024-11-16 16:35:55.417623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.098 [2024-11-16 16:35:55.417762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.098 [2024-11-16 16:35:55.417772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.357 [2024-11-16 16:35:55.621020] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
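With the target listening, the rpc_cmd calls above provision everything the suite needs: a TCP transport, a 64 MiB / 512-byte-block RAM disk, subsystem cnode1 with that disk as its namespace, and a listener on 10.0.0.2:4420. A sketch of the same sequence issued by hand with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (flags copied from the trace; rpc_cmd is assumed to be a thin wrapper over this entry point):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The "RPC Unix domain socket path /var/tmp/spdk.sock in use" errors around this point come from bdevio trying to start its own RPC server on the socket the target already owns; the suite still runs to completion, so the error is benign in this context.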
00:16:18.357 [2024-11-16 16:35:55.621074] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:18.357 I/O targets: 00:16:18.357 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:18.357 00:16:18.357 00:16:18.357 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.357 http://cunit.sourceforge.net/ 00:16:18.357 00:16:18.357 00:16:18.357 Suite: bdevio tests on: Nvme1n1 00:16:18.357 Test: blockdev write read block ...passed 00:16:18.357 Test: blockdev write zeroes read block ...passed 00:16:18.357 Test: blockdev write zeroes read no split ...passed 00:16:18.357 Test: blockdev write zeroes read split ...passed 00:16:18.357 Test: blockdev write zeroes read split partial ...passed 00:16:18.357 Test: blockdev reset ...[2024-11-16 16:35:55.739842] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:18.357 [2024-11-16 16:35:55.739924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1198ed0 (9): Bad file descriptor 00:16:18.357 passed 00:16:18.357 Test: blockdev write read 8 blocks ...[2024-11-16 16:35:55.750970] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:18.357 passed 00:16:18.357 Test: blockdev write read size > 128k ...passed 00:16:18.357 Test: blockdev write read invalid size ...passed 00:16:18.357 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:18.357 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:18.357 Test: blockdev write read max offset ...passed 00:16:18.615 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:18.615 Test: blockdev writev readv 8 blocks ...passed 00:16:18.615 Test: blockdev writev readv 30 x 1block ...passed 00:16:18.615 Test: blockdev writev readv block ...passed 00:16:18.615 Test: blockdev writev readv size > 128k ...passed 00:16:18.615 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:18.615 Test: blockdev comparev and writev ...[2024-11-16 16:35:55.926499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.615 [2024-11-16 16:35:55.926566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.615 [2024-11-16 16:35:55.926584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.615 [2024-11-16 16:35:55.926594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.615 [2024-11-16 16:35:55.926890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.615 [2024-11-16 16:35:55.926905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:18.615 [2024-11-16 16:35:55.926920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.615 [2024-11-16 16:35:55.926928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:18.615 [2024-11-16 16:35:55.927226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.615 [2024-11-16 16:35:55.927242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:18.615 [2024-11-16 16:35:55.927256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.615 [2024-11-16 16:35:55.927265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:18.615 [2024-11-16 16:35:55.927557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.615 [2024-11-16 16:35:55.927571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:18.615 [2024-11-16 16:35:55.927585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.615 [2024-11-16 16:35:55.927594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:18.615 passed 00:16:18.615 Test: blockdev nvme passthru rw ...passed 00:16:18.615 Test: blockdev nvme passthru vendor specific ...[2024-11-16 16:35:56.010346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.615 [2024-11-16 16:35:56.010371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:18.615 [2024-11-16 16:35:56.010717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.615 [2024-11-16 16:35:56.010739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:18.615 [2024-11-16 16:35:56.011002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.615 [2024-11-16 16:35:56.011082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:18.615 passed 00:16:18.615 Test: blockdev nvme admin passthru ...[2024-11-16 16:35:56.011384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.615 [2024-11-16 16:35:56.011405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:18.615 passed 00:16:18.615 Test: blockdev copy ...passed 00:16:18.615 00:16:18.615 Run Summary: Type Total Ran Passed Failed Inactive 00:16:18.615 suites 1 1 n/a 0 0 00:16:18.615 tests 23 23 23 0 0 00:16:18.615 asserts 152 152 152 0 n/a 00:16:18.615 00:16:18.615 Elapsed time = 0.892 seconds 00:16:18.873 16:35:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.873 16:35:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.873 16:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:18.873 16:35:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.873 16:35:56 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:18.873 16:35:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:18.873 16:35:56 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:18.873 16:35:56 -- nvmf/common.sh@116 -- # sync 00:16:19.131 16:35:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:19.131 16:35:56 -- nvmf/common.sh@119 -- # set +e 00:16:19.131 16:35:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:19.131 16:35:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:19.131 rmmod nvme_tcp 00:16:19.131 rmmod nvme_fabrics 00:16:19.131 rmmod nvme_keyring 00:16:19.131 16:35:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:19.131 16:35:56 -- nvmf/common.sh@123 -- # set -e 00:16:19.131 16:35:56 -- nvmf/common.sh@124 -- # return 0 00:16:19.131 16:35:56 -- nvmf/common.sh@477 -- # '[' -n 87976 ']' 00:16:19.131 16:35:56 -- nvmf/common.sh@478 -- # killprocess 87976 00:16:19.131 16:35:56 -- common/autotest_common.sh@936 -- # '[' -z 87976 ']' 00:16:19.131 16:35:56 -- common/autotest_common.sh@940 -- # kill -0 87976 00:16:19.131 16:35:56 -- common/autotest_common.sh@941 -- # uname 00:16:19.131 16:35:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.131 16:35:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87976 00:16:19.131 16:35:56 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:19.131 16:35:56 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:19.131 killing process with pid 87976 00:16:19.131 16:35:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87976' 00:16:19.131 16:35:56 -- common/autotest_common.sh@955 -- # kill 87976 00:16:19.131 16:35:56 -- common/autotest_common.sh@960 -- # wait 87976 00:16:19.389 16:35:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:19.389 16:35:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:19.389 16:35:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:19.389 16:35:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.389 16:35:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:19.389 16:35:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.389 16:35:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.389 16:35:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.389 16:35:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:19.389 00:16:19.389 real 0m3.470s 00:16:19.389 user 0m12.657s 00:16:19.389 sys 0m0.874s 00:16:19.389 16:35:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:19.389 ************************************ 00:16:19.389 16:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:19.389 END TEST nvmf_bdevio 00:16:19.389 ************************************ 00:16:19.647 16:35:56 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:19.648 16:35:56 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:19.648 16:35:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:19.648 16:35:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.648 16:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:19.648 ************************************ 00:16:19.648 START TEST nvmf_bdevio_no_huge 00:16:19.648 ************************************ 00:16:19.648 16:35:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:19.648 * Looking for test storage... 
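Each suite opens with the same guard, visible at the top of this log and again just below: autotest_common.sh asks lcov for its version and runs it through cmp_versions ("lt 1.15 2") to decide which branch/function-coverage flags the installed lcov understands. The comparison is plain bash, splitting on dots and comparing field by field; a simplified sketch of the idea (numeric fields only, unlike the scripts/common.sh original, which also splits on '-' and ':'):

    # return 0 (true) when version $1 sorts strictly before $2
    version_lt() {
        local -a ver1 ver2
        local v n
        IFS=. read -ra ver1 <<< "$1"
        IFS=. read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            # a missing field counts as 0, so 2 compares like 2.0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # versions are equal
    }

    version_lt 1.15 2 && echo "lcov predates 2.x, enable the legacy --rc flags"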
00:16:19.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:19.648 16:35:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:19.648 16:35:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:19.648 16:35:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:19.648 16:35:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:19.648 16:35:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:19.648 16:35:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:19.648 16:35:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:19.648 16:35:57 -- scripts/common.sh@335 -- # IFS=.-: 00:16:19.648 16:35:57 -- scripts/common.sh@335 -- # read -ra ver1 00:16:19.648 16:35:57 -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.648 16:35:57 -- scripts/common.sh@336 -- # read -ra ver2 00:16:19.648 16:35:57 -- scripts/common.sh@337 -- # local 'op=<' 00:16:19.648 16:35:57 -- scripts/common.sh@339 -- # ver1_l=2 00:16:19.648 16:35:57 -- scripts/common.sh@340 -- # ver2_l=1 00:16:19.648 16:35:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:19.648 16:35:57 -- scripts/common.sh@343 -- # case "$op" in 00:16:19.648 16:35:57 -- scripts/common.sh@344 -- # : 1 00:16:19.648 16:35:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:19.648 16:35:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:19.648 16:35:57 -- scripts/common.sh@364 -- # decimal 1 00:16:19.648 16:35:57 -- scripts/common.sh@352 -- # local d=1 00:16:19.648 16:35:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.648 16:35:57 -- scripts/common.sh@354 -- # echo 1 00:16:19.648 16:35:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:19.648 16:35:57 -- scripts/common.sh@365 -- # decimal 2 00:16:19.648 16:35:57 -- scripts/common.sh@352 -- # local d=2 00:16:19.648 16:35:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.648 16:35:57 -- scripts/common.sh@354 -- # echo 2 00:16:19.648 16:35:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:19.648 16:35:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:19.648 16:35:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:19.648 16:35:57 -- scripts/common.sh@367 -- # return 0 00:16:19.648 16:35:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.648 16:35:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:19.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.648 --rc genhtml_branch_coverage=1 00:16:19.648 --rc genhtml_function_coverage=1 00:16:19.648 --rc genhtml_legend=1 00:16:19.648 --rc geninfo_all_blocks=1 00:16:19.648 --rc geninfo_unexecuted_blocks=1 00:16:19.648 00:16:19.648 ' 00:16:19.648 16:35:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:19.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.648 --rc genhtml_branch_coverage=1 00:16:19.648 --rc genhtml_function_coverage=1 00:16:19.648 --rc genhtml_legend=1 00:16:19.648 --rc geninfo_all_blocks=1 00:16:19.648 --rc geninfo_unexecuted_blocks=1 00:16:19.648 00:16:19.648 ' 00:16:19.648 16:35:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:19.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.648 --rc genhtml_branch_coverage=1 00:16:19.648 --rc genhtml_function_coverage=1 00:16:19.648 --rc genhtml_legend=1 00:16:19.648 --rc geninfo_all_blocks=1 00:16:19.648 --rc geninfo_unexecuted_blocks=1 00:16:19.648 00:16:19.648 ' 00:16:19.648 
16:35:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:19.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.648 --rc genhtml_branch_coverage=1 00:16:19.648 --rc genhtml_function_coverage=1 00:16:19.648 --rc genhtml_legend=1 00:16:19.648 --rc geninfo_all_blocks=1 00:16:19.648 --rc geninfo_unexecuted_blocks=1 00:16:19.648 00:16:19.648 ' 00:16:19.648 16:35:57 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.648 16:35:57 -- nvmf/common.sh@7 -- # uname -s 00:16:19.648 16:35:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.648 16:35:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.648 16:35:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.648 16:35:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.648 16:35:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.648 16:35:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.648 16:35:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.648 16:35:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.648 16:35:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.648 16:35:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.648 16:35:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:16:19.648 16:35:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:16:19.648 16:35:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.648 16:35:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.648 16:35:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.648 16:35:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.648 16:35:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.648 16:35:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.648 16:35:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.648 16:35:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.648 16:35:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.648 16:35:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.648 16:35:57 -- paths/export.sh@5 -- # export PATH 00:16:19.648 16:35:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.648 16:35:57 -- nvmf/common.sh@46 -- # : 0 00:16:19.648 16:35:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:19.648 16:35:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:19.648 16:35:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:19.648 16:35:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.648 16:35:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.648 16:35:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:19.648 16:35:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:19.648 16:35:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:19.648 16:35:57 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:19.648 16:35:57 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:19.648 16:35:57 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:19.648 16:35:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:19.648 16:35:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.648 16:35:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:19.648 16:35:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:19.648 16:35:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:19.648 16:35:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.648 16:35:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.648 16:35:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.648 16:35:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:19.648 16:35:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:19.648 16:35:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:19.648 16:35:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:19.648 16:35:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:19.648 16:35:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:19.648 16:35:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.648 16:35:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.648 16:35:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.648 16:35:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:19.648 16:35:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.648 16:35:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.648 16:35:57 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.648 16:35:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.648 16:35:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.648 16:35:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.648 16:35:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.649 16:35:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.649 16:35:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:19.649 16:35:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:19.649 Cannot find device "nvmf_tgt_br" 00:16:19.649 16:35:57 -- nvmf/common.sh@154 -- # true 00:16:19.649 16:35:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.907 Cannot find device "nvmf_tgt_br2" 00:16:19.907 16:35:57 -- nvmf/common.sh@155 -- # true 00:16:19.907 16:35:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:19.907 16:35:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:19.907 Cannot find device "nvmf_tgt_br" 00:16:19.908 16:35:57 -- nvmf/common.sh@157 -- # true 00:16:19.908 16:35:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:19.908 Cannot find device "nvmf_tgt_br2" 00:16:19.908 16:35:57 -- nvmf/common.sh@158 -- # true 00:16:19.908 16:35:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:19.908 16:35:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:19.908 16:35:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.908 16:35:57 -- nvmf/common.sh@161 -- # true 00:16:19.908 16:35:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.908 16:35:57 -- nvmf/common.sh@162 -- # true 00:16:19.908 16:35:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.908 16:35:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.908 16:35:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.908 16:35:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.908 16:35:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.908 16:35:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.908 16:35:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.908 16:35:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:19.908 16:35:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:19.908 16:35:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:19.908 16:35:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:19.908 16:35:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:19.908 16:35:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:19.908 16:35:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.908 16:35:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.908 16:35:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:19.908 16:35:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:19.908 16:35:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:19.908 16:35:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.908 16:35:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.908 16:35:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.166 16:35:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.166 16:35:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.166 16:35:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:20.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:20.166 00:16:20.166 --- 10.0.0.2 ping statistics --- 00:16:20.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.166 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:20.166 16:35:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:20.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:20.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:16:20.166 00:16:20.166 --- 10.0.0.3 ping statistics --- 00:16:20.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.166 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:20.166 16:35:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:16:20.166 00:16:20.166 --- 10.0.0.1 ping statistics --- 00:16:20.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.166 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:20.166 16:35:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.166 16:35:57 -- nvmf/common.sh@421 -- # return 0 00:16:20.166 16:35:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:20.166 16:35:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.167 16:35:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:20.167 16:35:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:20.167 16:35:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.167 16:35:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:20.167 16:35:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:20.167 16:35:57 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:20.167 16:35:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:20.167 16:35:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.167 16:35:57 -- common/autotest_common.sh@10 -- # set +x 00:16:20.167 16:35:57 -- nvmf/common.sh@469 -- # nvmfpid=88220 00:16:20.167 16:35:57 -- nvmf/common.sh@470 -- # waitforlisten 88220 00:16:20.167 16:35:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:20.167 16:35:57 -- common/autotest_common.sh@829 -- # '[' -z 88220 ']' 00:16:20.167 16:35:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.167 16:35:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
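Up to here the setup is identical to the first suite; the difference is memory. Both the target and bdevio now run with --no-huge and a fixed pool instead of hugepages, which the EAL parameter lines reflect as "-m 1024 --no-huge --iova-mode=va" (versus "--iova-mode=pa" with hugepages above). Side by side, the two target invocations from the trace:

    APP=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt

    # first suite: hugepage-backed, physical-address IOVA
    ip netns exec nvmf_tgt_ns_spdk "$APP" -i 0 -e 0xFFFF -m 0x78

    # this suite: 1024 MiB of ordinary pages, virtual-address IOVA
    ip netns exec nvmf_tgt_ns_spdk "$APP" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

Here -m 0x78 is the reactor core mask (cores 3 through 6, matching the "Reactor started on core" lines) and -s 1024 caps the memory size in MiB.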
00:16:20.167 16:35:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.167 16:35:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.167 16:35:57 -- common/autotest_common.sh@10 -- # set +x 00:16:20.167 [2024-11-16 16:35:57.499102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:20.167 [2024-11-16 16:35:57.499211] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:20.167 [2024-11-16 16:35:57.646715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.425 [2024-11-16 16:35:57.743644] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:20.425 [2024-11-16 16:35:57.743802] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.425 [2024-11-16 16:35:57.743816] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.425 [2024-11-16 16:35:57.743824] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.425 [2024-11-16 16:35:57.744366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:20.425 [2024-11-16 16:35:57.744504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:20.425 [2024-11-16 16:35:57.744658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:20.425 [2024-11-16 16:35:57.744711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.362 16:35:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.362 16:35:58 -- common/autotest_common.sh@862 -- # return 0 00:16:21.362 16:35:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:21.362 16:35:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:21.362 16:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 16:35:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.362 16:35:58 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.362 16:35:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 16:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 [2024-11-16 16:35:58.559173] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.362 16:35:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 16:35:58 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:21.362 16:35:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 16:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 Malloc0 00:16:21.362 16:35:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 16:35:58 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:21.362 16:35:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 16:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 16:35:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 16:35:58 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:21.362 16:35:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 
16:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 16:35:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 16:35:58 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.362 16:35:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.362 16:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:21.362 [2024-11-16 16:35:58.607573] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.362 16:35:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.362 16:35:58 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:21.362 16:35:58 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:21.362 16:35:58 -- nvmf/common.sh@520 -- # config=() 00:16:21.362 16:35:58 -- nvmf/common.sh@520 -- # local subsystem config 00:16:21.362 16:35:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:21.362 16:35:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:21.362 { 00:16:21.362 "params": { 00:16:21.362 "name": "Nvme$subsystem", 00:16:21.362 "trtype": "$TEST_TRANSPORT", 00:16:21.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:21.362 "adrfam": "ipv4", 00:16:21.362 "trsvcid": "$NVMF_PORT", 00:16:21.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:21.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:21.362 "hdgst": ${hdgst:-false}, 00:16:21.362 "ddgst": ${ddgst:-false} 00:16:21.362 }, 00:16:21.362 "method": "bdev_nvme_attach_controller" 00:16:21.362 } 00:16:21.362 EOF 00:16:21.362 )") 00:16:21.362 16:35:58 -- nvmf/common.sh@542 -- # cat 00:16:21.362 16:35:58 -- nvmf/common.sh@544 -- # jq . 00:16:21.362 16:35:58 -- nvmf/common.sh@545 -- # IFS=, 00:16:21.362 16:35:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:21.362 "params": { 00:16:21.362 "name": "Nvme1", 00:16:21.362 "trtype": "tcp", 00:16:21.362 "traddr": "10.0.0.2", 00:16:21.362 "adrfam": "ipv4", 00:16:21.362 "trsvcid": "4420", 00:16:21.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:21.362 "hdgst": false, 00:16:21.362 "ddgst": false 00:16:21.362 }, 00:16:21.362 "method": "bdev_nvme_attach_controller" 00:16:21.362 }' 00:16:21.362 [2024-11-16 16:35:58.664381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:21.362 [2024-11-16 16:35:58.664491] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88279 ] 00:16:21.362 [2024-11-16 16:35:58.805442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:21.620 [2024-11-16 16:35:58.951308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.620 [2024-11-16 16:35:58.951434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.620 [2024-11-16 16:35:58.951446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.880 [2024-11-16 16:35:59.162897] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
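As before, bdevio receives its initiator configuration as JSON over an inherited descriptor (--json /dev/fd/62) rather than a file on disk; gen_nvmf_target_json fills one bdev_nvme_attach_controller entry from the harness variables. The filled-in entry printed above, reformatted for readability with values exactly as in the trace:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

This attaches the target's cnode1 subsystem at 10.0.0.2:4420 as bdev Nvme1, with header and data digests disabled; the suite's twenty-three block-device tests then run against that bdev.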
00:16:21.880 [2024-11-16 16:35:59.162936] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:21.880 I/O targets: 00:16:21.880 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:21.880 00:16:21.880 00:16:21.880 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.880 http://cunit.sourceforge.net/ 00:16:21.880 00:16:21.880 00:16:21.880 Suite: bdevio tests on: Nvme1n1 00:16:21.880 Test: blockdev write read block ...passed 00:16:21.880 Test: blockdev write zeroes read block ...passed 00:16:21.880 Test: blockdev write zeroes read no split ...passed 00:16:21.880 Test: blockdev write zeroes read split ...passed 00:16:21.880 Test: blockdev write zeroes read split partial ...passed 00:16:21.880 Test: blockdev reset ...[2024-11-16 16:35:59.290841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:21.880 [2024-11-16 16:35:59.290984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a92820 (9): Bad file descriptor 00:16:21.880 [2024-11-16 16:35:59.310901] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:21.880 passed 00:16:21.880 Test: blockdev write read 8 blocks ...passed 00:16:21.880 Test: blockdev write read size > 128k ...passed 00:16:21.880 Test: blockdev write read invalid size ...passed 00:16:21.880 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:21.880 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:21.880 Test: blockdev write read max offset ...passed 00:16:22.141 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:22.141 Test: blockdev writev readv 8 blocks ...passed 00:16:22.141 Test: blockdev writev readv 30 x 1block ...passed 00:16:22.141 Test: blockdev writev readv block ...passed 00:16:22.141 Test: blockdev writev readv size > 128k ...passed 00:16:22.141 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:22.141 Test: blockdev comparev and writev ...[2024-11-16 16:35:59.489426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.141 [2024-11-16 16:35:59.489479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.489513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.141 [2024-11-16 16:35:59.489524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.489943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.141 [2024-11-16 16:35:59.489969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.489985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.141 [2024-11-16 16:35:59.489995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.490351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.141 [2024-11-16 16:35:59.490382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.490510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.141 [2024-11-16 16:35:59.490525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.490987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.141 [2024-11-16 16:35:59.491031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.491112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:22.141 [2024-11-16 16:35:59.491124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:22.141 passed 00:16:22.141 Test: blockdev nvme passthru rw ...passed 00:16:22.141 Test: blockdev nvme passthru vendor specific ...[2024-11-16 16:35:59.574428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.141 [2024-11-16 16:35:59.574457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.574840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.141 [2024-11-16 16:35:59.574866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.574988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.141 [2024-11-16 16:35:59.575122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:22.141 [2024-11-16 16:35:59.575368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:22.141 [2024-11-16 16:35:59.575397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:22.141 passed 00:16:22.141 Test: blockdev nvme admin passthru ...passed 00:16:22.399 Test: blockdev copy ...passed 00:16:22.399 00:16:22.399 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.399 suites 1 1 n/a 0 0 00:16:22.399 tests 23 23 23 0 0 00:16:22.399 asserts 152 152 152 0 n/a 00:16:22.399 00:16:22.399 Elapsed time = 0.943 seconds 00:16:22.657 16:35:59 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.657 16:35:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.657 16:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:22.657 16:35:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.657 16:35:59 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:22.657 16:35:59 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:22.657 16:35:59 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:22.657 16:35:59 -- nvmf/common.sh@116 -- # sync 00:16:22.657 16:36:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:22.657 16:36:00 -- nvmf/common.sh@119 -- # set +e 00:16:22.657 16:36:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:22.657 16:36:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:22.657 rmmod nvme_tcp 00:16:22.657 rmmod nvme_fabrics 00:16:22.657 rmmod nvme_keyring 00:16:22.657 16:36:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:22.657 16:36:00 -- nvmf/common.sh@123 -- # set -e 00:16:22.657 16:36:00 -- nvmf/common.sh@124 -- # return 0 00:16:22.657 16:36:00 -- nvmf/common.sh@477 -- # '[' -n 88220 ']' 00:16:22.657 16:36:00 -- nvmf/common.sh@478 -- # killprocess 88220 00:16:22.657 16:36:00 -- common/autotest_common.sh@936 -- # '[' -z 88220 ']' 00:16:22.657 16:36:00 -- common/autotest_common.sh@940 -- # kill -0 88220 00:16:22.657 16:36:00 -- common/autotest_common.sh@941 -- # uname 00:16:22.657 16:36:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.657 16:36:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88220 00:16:22.915 16:36:00 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:22.915 16:36:00 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:22.915 killing process with pid 88220 00:16:22.915 16:36:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88220' 00:16:22.915 16:36:00 -- common/autotest_common.sh@955 -- # kill 88220 00:16:22.915 16:36:00 -- common/autotest_common.sh@960 -- # wait 88220 00:16:23.174 16:36:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:23.174 16:36:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:23.174 16:36:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:23.174 16:36:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.174 16:36:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:23.174 16:36:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.174 16:36:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.174 16:36:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.174 16:36:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:23.174 00:16:23.174 real 0m3.692s 00:16:23.174 user 0m13.283s 00:16:23.174 sys 0m1.380s 00:16:23.174 16:36:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:23.174 ************************************ 00:16:23.174 16:36:00 -- common/autotest_common.sh@10 -- # set +x 00:16:23.174 END TEST nvmf_bdevio_no_huge 00:16:23.174 ************************************ 00:16:23.174 16:36:00 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:23.174 16:36:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:23.174 16:36:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:23.174 16:36:00 -- common/autotest_common.sh@10 -- # set +x 00:16:23.174 ************************************ 00:16:23.174 START TEST nvmf_tls 00:16:23.174 ************************************ 00:16:23.174 16:36:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:23.433 * Looking for test storage... 
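The teardown traced here follows a fixed recipe: sync outstanding I/O, unload the nvme-tcp/nvme-fabrics/nvme-keyring modules, then stop the target through the killprocess helper, which checks that the pid is still alive and names an SPDK reactor before signalling it. A condensed sketch of that helper, assuming the guard order shown in the trace (the real function in common/autotest_common.sh also special-cases processes running under sudo and non-Linux ps flags):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # already gone, nothing to do
        [[ $(uname) == Linux ]] || return 1          # simplified: the ps flags below are Linux-specific
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_3 for an SPDK app
        [[ $name == sudo ]] && return 1              # simplified: skip the sudo-wrapper branch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap it so sockets and ports free up
    }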
00:16:23.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:23.433 16:36:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:23.433 16:36:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:23.433 16:36:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:23.433 16:36:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:23.433 16:36:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:23.433 16:36:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:23.433 16:36:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:23.433 16:36:00 -- scripts/common.sh@335 -- # IFS=.-: 00:16:23.433 16:36:00 -- scripts/common.sh@335 -- # read -ra ver1 00:16:23.433 16:36:00 -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.433 16:36:00 -- scripts/common.sh@336 -- # read -ra ver2 00:16:23.433 16:36:00 -- scripts/common.sh@337 -- # local 'op=<' 00:16:23.433 16:36:00 -- scripts/common.sh@339 -- # ver1_l=2 00:16:23.433 16:36:00 -- scripts/common.sh@340 -- # ver2_l=1 00:16:23.433 16:36:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:23.433 16:36:00 -- scripts/common.sh@343 -- # case "$op" in 00:16:23.433 16:36:00 -- scripts/common.sh@344 -- # : 1 00:16:23.433 16:36:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:23.433 16:36:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.433 16:36:00 -- scripts/common.sh@364 -- # decimal 1 00:16:23.433 16:36:00 -- scripts/common.sh@352 -- # local d=1 00:16:23.433 16:36:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.433 16:36:00 -- scripts/common.sh@354 -- # echo 1 00:16:23.433 16:36:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:23.433 16:36:00 -- scripts/common.sh@365 -- # decimal 2 00:16:23.433 16:36:00 -- scripts/common.sh@352 -- # local d=2 00:16:23.433 16:36:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.433 16:36:00 -- scripts/common.sh@354 -- # echo 2 00:16:23.433 16:36:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:23.433 16:36:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:23.433 16:36:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:23.433 16:36:00 -- scripts/common.sh@367 -- # return 0 00:16:23.433 16:36:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.433 16:36:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:23.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.433 --rc genhtml_branch_coverage=1 00:16:23.433 --rc genhtml_function_coverage=1 00:16:23.433 --rc genhtml_legend=1 00:16:23.433 --rc geninfo_all_blocks=1 00:16:23.433 --rc geninfo_unexecuted_blocks=1 00:16:23.433 00:16:23.433 ' 00:16:23.433 16:36:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:23.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.433 --rc genhtml_branch_coverage=1 00:16:23.434 --rc genhtml_function_coverage=1 00:16:23.434 --rc genhtml_legend=1 00:16:23.434 --rc geninfo_all_blocks=1 00:16:23.434 --rc geninfo_unexecuted_blocks=1 00:16:23.434 00:16:23.434 ' 00:16:23.434 16:36:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:23.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.434 --rc genhtml_branch_coverage=1 00:16:23.434 --rc genhtml_function_coverage=1 00:16:23.434 --rc genhtml_legend=1 00:16:23.434 --rc geninfo_all_blocks=1 00:16:23.434 --rc geninfo_unexecuted_blocks=1 00:16:23.434 00:16:23.434 ' 00:16:23.434 
16:36:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:23.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.434 --rc genhtml_branch_coverage=1 00:16:23.434 --rc genhtml_function_coverage=1 00:16:23.434 --rc genhtml_legend=1 00:16:23.434 --rc geninfo_all_blocks=1 00:16:23.434 --rc geninfo_unexecuted_blocks=1 00:16:23.434 00:16:23.434 ' 00:16:23.434 16:36:00 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.434 16:36:00 -- nvmf/common.sh@7 -- # uname -s 00:16:23.434 16:36:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.434 16:36:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.434 16:36:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.434 16:36:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.434 16:36:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.434 16:36:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.434 16:36:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.434 16:36:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.434 16:36:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.434 16:36:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.434 16:36:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:16:23.434 16:36:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:16:23.434 16:36:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.434 16:36:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.434 16:36:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.434 16:36:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.434 16:36:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.434 16:36:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.434 16:36:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.434 16:36:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.434 16:36:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.434 16:36:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.434 16:36:00 -- paths/export.sh@5 -- # export PATH 00:16:23.434 16:36:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.434 16:36:00 -- nvmf/common.sh@46 -- # : 0 00:16:23.434 16:36:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:23.434 16:36:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:23.434 16:36:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:23.434 16:36:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.434 16:36:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.434 16:36:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:23.434 16:36:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:23.434 16:36:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:23.434 16:36:00 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.434 16:36:00 -- target/tls.sh@71 -- # nvmftestinit 00:16:23.434 16:36:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:23.434 16:36:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.434 16:36:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:23.434 16:36:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:23.434 16:36:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:23.434 16:36:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.434 16:36:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.434 16:36:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.434 16:36:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:23.434 16:36:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:23.434 16:36:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:23.434 16:36:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:23.434 16:36:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:23.434 16:36:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:23.434 16:36:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.434 16:36:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.434 16:36:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.434 16:36:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:23.434 16:36:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.434 16:36:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.434 16:36:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.434 
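NET_TYPE=virt means nvmf_veth_init will now synthesize the whole fabric on one box: an initiator veth in the root namespace, a target veth moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining their peer ends. The commands below are the skeleton of what the steps that follow perform (the link-up steps, the second target interface nvmf_tgt_if2 at 10.0.0.3, and the teardown guards are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings later in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) are the smoke test that this plumbing came up.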
16:36:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.434 16:36:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.434 16:36:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.434 16:36:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.434 16:36:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.434 16:36:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:23.434 16:36:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:23.434 Cannot find device "nvmf_tgt_br" 00:16:23.434 16:36:00 -- nvmf/common.sh@154 -- # true 00:16:23.434 16:36:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.434 Cannot find device "nvmf_tgt_br2" 00:16:23.434 16:36:00 -- nvmf/common.sh@155 -- # true 00:16:23.434 16:36:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:23.434 16:36:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:23.434 Cannot find device "nvmf_tgt_br" 00:16:23.434 16:36:00 -- nvmf/common.sh@157 -- # true 00:16:23.434 16:36:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:23.434 Cannot find device "nvmf_tgt_br2" 00:16:23.434 16:36:00 -- nvmf/common.sh@158 -- # true 00:16:23.434 16:36:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:23.693 16:36:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:23.693 16:36:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.693 16:36:00 -- nvmf/common.sh@161 -- # true 00:16:23.693 16:36:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.693 16:36:00 -- nvmf/common.sh@162 -- # true 00:16:23.693 16:36:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.693 16:36:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.693 16:36:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.693 16:36:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.693 16:36:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.693 16:36:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.693 16:36:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.693 16:36:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.693 16:36:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.693 16:36:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:23.693 16:36:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:23.693 16:36:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:23.693 16:36:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:23.693 16:36:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.693 16:36:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.693 16:36:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.693 16:36:01 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:23.693 16:36:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:23.693 16:36:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.693 16:36:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.693 16:36:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.693 16:36:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.952 16:36:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.952 16:36:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:23.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:23.952 00:16:23.952 --- 10.0.0.2 ping statistics --- 00:16:23.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.952 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:23.952 16:36:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:23.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:23.952 00:16:23.952 --- 10.0.0.3 ping statistics --- 00:16:23.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.952 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:23.952 16:36:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:23.952 00:16:23.952 --- 10.0.0.1 ping statistics --- 00:16:23.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.952 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:23.952 16:36:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.952 16:36:01 -- nvmf/common.sh@421 -- # return 0 00:16:23.952 16:36:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:23.952 16:36:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.952 16:36:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:23.952 16:36:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:23.952 16:36:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.952 16:36:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:23.952 16:36:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:23.952 16:36:01 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:23.952 16:36:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:23.952 16:36:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.952 16:36:01 -- common/autotest_common.sh@10 -- # set +x 00:16:23.952 16:36:01 -- nvmf/common.sh@469 -- # nvmfpid=88469 00:16:23.952 16:36:01 -- nvmf/common.sh@470 -- # waitforlisten 88469 00:16:23.952 16:36:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:23.952 16:36:01 -- common/autotest_common.sh@829 -- # '[' -z 88469 ']' 00:16:23.952 16:36:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.952 16:36:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.952 16:36:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.952 16:36:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.952 16:36:01 -- common/autotest_common.sh@10 -- # set +x 00:16:23.952 [2024-11-16 16:36:01.284779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:23.952 [2024-11-16 16:36:01.284869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.952 [2024-11-16 16:36:01.424990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.211 [2024-11-16 16:36:01.508874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:24.211 [2024-11-16 16:36:01.509096] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.211 [2024-11-16 16:36:01.509124] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.211 [2024-11-16 16:36:01.509138] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.211 [2024-11-16 16:36:01.509180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.211 16:36:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.211 16:36:01 -- common/autotest_common.sh@862 -- # return 0 00:16:24.211 16:36:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:24.211 16:36:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.211 16:36:01 -- common/autotest_common.sh@10 -- # set +x 00:16:24.211 16:36:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.211 16:36:01 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:24.211 16:36:01 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:24.470 true 00:16:24.470 16:36:01 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:24.470 16:36:01 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:24.728 16:36:02 -- target/tls.sh@82 -- # version=0 00:16:24.728 16:36:02 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:24.728 16:36:02 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:24.987 16:36:02 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:24.987 16:36:02 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:25.246 16:36:02 -- target/tls.sh@90 -- # version=13 00:16:25.246 16:36:02 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:25.246 16:36:02 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:25.504 16:36:02 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.504 16:36:02 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:25.763 16:36:03 -- target/tls.sh@98 -- # version=7 00:16:25.763 16:36:03 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:25.763 16:36:03 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.763 16:36:03 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:26.021 16:36:03 -- target/tls.sh@105 -- # ktls=false 00:16:26.021 16:36:03 -- 
target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:26.021 16:36:03 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:26.280 16:36:03 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:26.280 16:36:03 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:26.538 16:36:03 -- target/tls.sh@113 -- # ktls=true 00:16:26.538 16:36:03 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:26.538 16:36:03 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:26.797 16:36:04 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:26.797 16:36:04 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:27.056 16:36:04 -- target/tls.sh@121 -- # ktls=false 00:16:27.056 16:36:04 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:27.056 16:36:04 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:27.056 16:36:04 -- target/tls.sh@49 -- # local key hash crc 00:16:27.056 16:36:04 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:27.056 16:36:04 -- target/tls.sh@51 -- # hash=01 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # gzip -1 -c 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # tail -c8 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # head -c 4 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # crc='p$H�' 00:16:27.056 16:36:04 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:27.056 16:36:04 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:27.056 16:36:04 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:27.056 16:36:04 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:27.056 16:36:04 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:27.056 16:36:04 -- target/tls.sh@49 -- # local key hash crc 00:16:27.056 16:36:04 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:27.056 16:36:04 -- target/tls.sh@51 -- # hash=01 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # tail -c8 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # gzip -1 -c 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # head -c 4 00:16:27.056 16:36:04 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:27.056 16:36:04 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:27.056 16:36:04 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:27.056 16:36:04 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:27.056 16:36:04 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:27.056 16:36:04 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.056 16:36:04 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:27.056 16:36:04 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:27.056 16:36:04 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:27.056 16:36:04 -- target/tls.sh@136 -- # chmod 0600 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.056 16:36:04 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:27.056 16:36:04 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:27.315 16:36:04 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:27.882 16:36:05 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.882 16:36:05 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:27.882 16:36:05 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:27.882 [2024-11-16 16:36:05.339474] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.882 16:36:05 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:28.141 16:36:05 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:28.400 [2024-11-16 16:36:05.815529] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:28.400 [2024-11-16 16:36:05.815784] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.400 16:36:05 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:28.659 malloc0 00:16:28.659 16:36:06 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:28.917 16:36:06 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:29.175 16:36:06 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:39.150 Initializing NVMe Controllers 00:16:39.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:39.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:39.150 Initialization complete. Launching workers. 
00:16:39.150 ======================================================== 00:16:39.150 Latency(us) 00:16:39.150 Device Information : IOPS MiB/s Average min max 00:16:39.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11859.08 46.32 5397.49 1531.56 14349.23 00:16:39.150 ======================================================== 00:16:39.150 Total : 11859.08 46.32 5397.49 1531.56 14349.23 00:16:39.150 00:16:39.150 16:36:16 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:39.150 16:36:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:39.150 16:36:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:39.150 16:36:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:39.150 16:36:16 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:39.150 16:36:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:39.151 16:36:16 -- target/tls.sh@28 -- # bdevperf_pid=88825 00:16:39.151 16:36:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:39.151 16:36:16 -- target/tls.sh@31 -- # waitforlisten 88825 /var/tmp/bdevperf.sock 00:16:39.151 16:36:16 -- common/autotest_common.sh@829 -- # '[' -z 88825 ']' 00:16:39.151 16:36:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:39.151 16:36:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:39.151 16:36:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:39.151 16:36:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.151 16:36:16 -- common/autotest_common.sh@10 -- # set +x 00:16:39.151 16:36:16 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:39.410 [2024-11-16 16:36:16.668020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:39.410 [2024-11-16 16:36:16.668144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88825 ] 00:16:39.410 [2024-11-16 16:36:16.810386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.410 [2024-11-16 16:36:16.870852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.345 16:36:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.345 16:36:17 -- common/autotest_common.sh@862 -- # return 0 00:16:40.345 16:36:17 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:40.345 [2024-11-16 16:36:17.827109] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:40.603 TLSTESTn1 00:16:40.603 16:36:17 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:40.603 Running I/O for 10 seconds... 
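Both sides of this connection authenticate with the interchange-format PSK derived earlier by format_interchange_psk. The format is "NVMeTLSkey-1:<hash>:<base64 of key bytes plus CRC32>:" where the CRC comes from the gzip-trailer trick visible in the trace (a gzip -1 stream ends with 4 bytes of CRC32 followed by 4 bytes of input length). A condensed sketch of the derivation; note the real script pipes through /dev/fd/62 because the raw CRC bytes are binary and plain command substitution can mangle them for some keys:

    key=00112233445566778899aabbccddeeff
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)   # CRC32 half of the gzip trailer
    echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
    # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: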
00:16:50.645 00:16:50.645 Latency(us) 00:16:50.645 [2024-11-16T16:36:28.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.645 [2024-11-16T16:36:28.136Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:50.645 Verification LBA range: start 0x0 length 0x2000 00:16:50.645 TLSTESTn1 : 10.02 5252.43 20.52 0.00 0.00 24326.19 6315.29 22997.18 00:16:50.645 [2024-11-16T16:36:28.136Z] =================================================================================================================== 00:16:50.645 [2024-11-16T16:36:28.136Z] Total : 5252.43 20.52 0.00 0.00 24326.19 6315.29 22997.18 00:16:50.645 0 00:16:50.645 16:36:28 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:50.645 16:36:28 -- target/tls.sh@45 -- # killprocess 88825 00:16:50.645 16:36:28 -- common/autotest_common.sh@936 -- # '[' -z 88825 ']' 00:16:50.645 16:36:28 -- common/autotest_common.sh@940 -- # kill -0 88825 00:16:50.645 16:36:28 -- common/autotest_common.sh@941 -- # uname 00:16:50.645 16:36:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:50.645 16:36:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88825 00:16:50.645 killing process with pid 88825 00:16:50.645 Received shutdown signal, test time was about 10.000000 seconds 00:16:50.645 00:16:50.645 Latency(us) 00:16:50.645 [2024-11-16T16:36:28.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.645 [2024-11-16T16:36:28.136Z] =================================================================================================================== 00:16:50.645 [2024-11-16T16:36:28.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.645 16:36:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:50.645 16:36:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:50.645 16:36:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88825' 00:16:50.645 16:36:28 -- common/autotest_common.sh@955 -- # kill 88825 00:16:50.645 16:36:28 -- common/autotest_common.sh@960 -- # wait 88825 00:16:50.904 16:36:28 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:50.904 16:36:28 -- common/autotest_common.sh@650 -- # local es=0 00:16:50.904 16:36:28 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:50.904 16:36:28 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:50.904 16:36:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.904 16:36:28 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:50.904 16:36:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.904 16:36:28 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:50.904 16:36:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:50.904 16:36:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:50.904 16:36:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:50.904 16:36:28 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:50.904 16:36:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:50.904 
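With the happy path verified (roughly 5252 IOPS of TLS-encrypted 4k verify traffic over 10 seconds), everything from here on is negative testing: each remaining attach attempt runs under the NOT wrapper, whose valid_exec_arg/es trace shows it succeeds only when the wrapped command fails. A minimal sketch under that reading (per the (( es > 128 )) check in the trace, the real helper also refuses to count signal deaths as a clean expected failure):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # exit 0 only if the wrapped command returned nonzero
    }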
16:36:28 -- target/tls.sh@28 -- # bdevperf_pid=88978 00:16:50.904 16:36:28 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:50.904 16:36:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:50.904 16:36:28 -- target/tls.sh@31 -- # waitforlisten 88978 /var/tmp/bdevperf.sock 00:16:50.904 16:36:28 -- common/autotest_common.sh@829 -- # '[' -z 88978 ']' 00:16:50.904 16:36:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.904 16:36:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.904 16:36:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.904 16:36:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.904 16:36:28 -- common/autotest_common.sh@10 -- # set +x 00:16:50.904 [2024-11-16 16:36:28.326123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:50.904 [2024-11-16 16:36:28.326363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88978 ] 00:16:51.163 [2024-11-16 16:36:28.465195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.163 [2024-11-16 16:36:28.517305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.099 16:36:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.099 16:36:29 -- common/autotest_common.sh@862 -- # return 0 00:16:52.099 16:36:29 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:52.099 [2024-11-16 16:36:29.551016] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:52.099 [2024-11-16 16:36:29.556133] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:52.099 [2024-11-16 16:36:29.556589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191ecc0 (107): Transport endpoint is not connected 00:16:52.099 [2024-11-16 16:36:29.557573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191ecc0 (9): Bad file descriptor 00:16:52.099 [2024-11-16 16:36:29.558569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:52.099 [2024-11-16 16:36:29.558592] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:52.099 [2024-11-16 16:36:29.558601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
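This first negative case presents key2.txt for host1, but the target earlier bound host1 to key1.txt via nvmf_subsystem_add_host, so the TLS handshake cannot complete and the controller lands in a failed state; the JSON-RPC error dumped next is the expected result. The mismatch side by side, with both commands taken from the traces (paths shortened to the repo-relative form):

    # target side (registered earlier): host1 is bound to key1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key1.txt
    # initiator side (this test): host1 presents key2 instead, so the attach fails
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key2.txt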
00:16:52.099 2024/11/16 16:36:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:52.099 request: 00:16:52.099 { 00:16:52.099 "method": "bdev_nvme_attach_controller", 00:16:52.099 "params": { 00:16:52.099 "name": "TLSTEST", 00:16:52.099 "trtype": "tcp", 00:16:52.099 "traddr": "10.0.0.2", 00:16:52.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.099 "adrfam": "ipv4", 00:16:52.099 "trsvcid": "4420", 00:16:52.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.099 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:52.099 } 00:16:52.099 } 00:16:52.099 Got JSON-RPC error response 00:16:52.099 GoRPCClient: error on JSON-RPC call 00:16:52.099 16:36:29 -- target/tls.sh@36 -- # killprocess 88978 00:16:52.099 16:36:29 -- common/autotest_common.sh@936 -- # '[' -z 88978 ']' 00:16:52.099 16:36:29 -- common/autotest_common.sh@940 -- # kill -0 88978 00:16:52.099 16:36:29 -- common/autotest_common.sh@941 -- # uname 00:16:52.099 16:36:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:52.099 16:36:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88978 00:16:52.357 killing process with pid 88978 00:16:52.357 Received shutdown signal, test time was about 10.000000 seconds 00:16:52.357 00:16:52.357 Latency(us) 00:16:52.357 [2024-11-16T16:36:29.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.357 [2024-11-16T16:36:29.848Z] =================================================================================================================== 00:16:52.357 [2024-11-16T16:36:29.848Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:52.357 16:36:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:52.357 16:36:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:52.357 16:36:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88978' 00:16:52.357 16:36:29 -- common/autotest_common.sh@955 -- # kill 88978 00:16:52.357 16:36:29 -- common/autotest_common.sh@960 -- # wait 88978 00:16:52.357 16:36:29 -- target/tls.sh@37 -- # return 1 00:16:52.357 16:36:29 -- common/autotest_common.sh@653 -- # es=1 00:16:52.357 16:36:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:52.357 16:36:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:52.357 16:36:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:52.357 16:36:29 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.357 16:36:29 -- common/autotest_common.sh@650 -- # local es=0 00:16:52.357 16:36:29 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.357 16:36:29 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:52.357 16:36:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:52.357 16:36:29 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:52.357 16:36:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:52.357 16:36:29 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:52.357 16:36:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:52.357 16:36:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:52.357 16:36:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:52.357 16:36:29 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:52.357 16:36:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:52.357 16:36:29 -- target/tls.sh@28 -- # bdevperf_pid=89018 00:16:52.357 16:36:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:52.357 16:36:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:52.357 16:36:29 -- target/tls.sh@31 -- # waitforlisten 89018 /var/tmp/bdevperf.sock 00:16:52.357 16:36:29 -- common/autotest_common.sh@829 -- # '[' -z 89018 ']' 00:16:52.357 16:36:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.357 16:36:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.357 16:36:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.357 16:36:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.357 16:36:29 -- common/autotest_common.sh@10 -- # set +x 00:16:52.357 [2024-11-16 16:36:29.843850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:52.357 [2024-11-16 16:36:29.843949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89018 ] 00:16:52.616 [2024-11-16 16:36:29.984482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.616 [2024-11-16 16:36:30.056301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.553 16:36:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.553 16:36:30 -- common/autotest_common.sh@862 -- # return 0 00:16:53.553 16:36:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.553 [2024-11-16 16:36:30.997669] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.553 [2024-11-16 16:36:31.006119] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:53.553 [2024-11-16 16:36:31.006163] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:53.553 [2024-11-16 16:36:31.006245] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:53.553 [2024-11-16 16:36:31.007072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xb3fcc0 (107): Transport endpoint is not connected 00:16:53.553 [2024-11-16 16:36:31.008057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3fcc0 (9): Bad file descriptor 00:16:53.553 [2024-11-16 16:36:31.009053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:53.553 [2024-11-16 16:36:31.009104] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:53.553 [2024-11-16 16:36:31.009115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:53.553 2024/11/16 16:36:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:53.553 request: 00:16:53.553 { 00:16:53.553 "method": "bdev_nvme_attach_controller", 00:16:53.553 "params": { 00:16:53.553 "name": "TLSTEST", 00:16:53.553 "trtype": "tcp", 00:16:53.553 "traddr": "10.0.0.2", 00:16:53.553 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:53.553 "adrfam": "ipv4", 00:16:53.553 "trsvcid": "4420", 00:16:53.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.553 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:53.553 } 00:16:53.553 } 00:16:53.553 Got JSON-RPC error response 00:16:53.553 GoRPCClient: error on JSON-RPC call 00:16:53.553 16:36:31 -- target/tls.sh@36 -- # killprocess 89018 00:16:53.553 16:36:31 -- common/autotest_common.sh@936 -- # '[' -z 89018 ']' 00:16:53.553 16:36:31 -- common/autotest_common.sh@940 -- # kill -0 89018 00:16:53.553 16:36:31 -- common/autotest_common.sh@941 -- # uname 00:16:53.553 16:36:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.553 16:36:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89018 00:16:53.812 killing process with pid 89018 00:16:53.812 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.812 00:16:53.812 Latency(us) 00:16:53.812 [2024-11-16T16:36:31.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.812 [2024-11-16T16:36:31.303Z] =================================================================================================================== 00:16:53.812 [2024-11-16T16:36:31.303Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:53.812 16:36:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:53.812 16:36:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:53.812 16:36:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89018' 00:16:53.812 16:36:31 -- common/autotest_common.sh@955 -- # kill 89018 00:16:53.812 16:36:31 -- common/autotest_common.sh@960 -- # wait 89018 00:16:53.812 16:36:31 -- target/tls.sh@37 -- # return 1 00:16:53.812 16:36:31 -- common/autotest_common.sh@653 -- # es=1 00:16:53.812 16:36:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:53.812 16:36:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:53.812 16:36:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:53.812 16:36:31 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.812 16:36:31 -- 
common/autotest_common.sh@650 -- # local es=0 00:16:53.812 16:36:31 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.812 16:36:31 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:53.812 16:36:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.812 16:36:31 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:53.812 16:36:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.812 16:36:31 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:53.812 16:36:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:53.812 16:36:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:53.812 16:36:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:53.812 16:36:31 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:53.812 16:36:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.812 16:36:31 -- target/tls.sh@28 -- # bdevperf_pid=89062 00:16:53.812 16:36:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.812 16:36:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.812 16:36:31 -- target/tls.sh@31 -- # waitforlisten 89062 /var/tmp/bdevperf.sock 00:16:53.812 16:36:31 -- common/autotest_common.sh@829 -- # '[' -z 89062 ']' 00:16:53.812 16:36:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.812 16:36:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.812 16:36:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.812 16:36:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.812 16:36:31 -- common/autotest_common.sh@10 -- # set +x 00:16:54.072 [2024-11-16 16:36:31.304883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:54.072 [2024-11-16 16:36:31.304986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89062 ] 00:16:54.072 [2024-11-16 16:36:31.445806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.072 [2024-11-16 16:36:31.500709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.008 16:36:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.008 16:36:32 -- common/autotest_common.sh@862 -- # return 0 00:16:55.008 16:36:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.008 [2024-11-16 16:36:32.482585] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:55.008 [2024-11-16 16:36:32.487343] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:55.008 [2024-11-16 16:36:32.487399] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:55.008 [2024-11-16 16:36:32.487486] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:55.008 [2024-11-16 16:36:32.488097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d0cc0 (107): Transport endpoint is not connected 00:16:55.008 [2024-11-16 16:36:32.489058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d0cc0 (9): Bad file descriptor 00:16:55.008 [2024-11-16 16:36:32.490053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:55.008 [2024-11-16 16:36:32.490102] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:55.008 [2024-11-16 16:36:32.490115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
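The third case keeps key1 and host1 but aims at cnode2. The lookup that fails on the target is keyed by the TLS PSK identity, which bakes in both NQNs, so a PSK registered for host1 against cnode1 says nothing about host1 against cnode2. Reading the shape straight off the error string above (a reading of the log, not a documented SPDK API):

    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    identity="NVMe0R01 ${hostnqn} ${subnqn}"   # the exact string in the lookup error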
00:16:55.008 2024/11/16 16:36:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:55.008 request: 00:16:55.008 { 00:16:55.008 "method": "bdev_nvme_attach_controller", 00:16:55.008 "params": { 00:16:55.008 "name": "TLSTEST", 00:16:55.008 "trtype": "tcp", 00:16:55.008 "traddr": "10.0.0.2", 00:16:55.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:55.008 "adrfam": "ipv4", 00:16:55.008 "trsvcid": "4420", 00:16:55.008 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:55.009 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:55.009 } 00:16:55.009 } 00:16:55.009 Got JSON-RPC error response 00:16:55.009 GoRPCClient: error on JSON-RPC call 00:16:55.268 16:36:32 -- target/tls.sh@36 -- # killprocess 89062 00:16:55.268 16:36:32 -- common/autotest_common.sh@936 -- # '[' -z 89062 ']' 00:16:55.268 16:36:32 -- common/autotest_common.sh@940 -- # kill -0 89062 00:16:55.268 16:36:32 -- common/autotest_common.sh@941 -- # uname 00:16:55.268 16:36:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:55.268 16:36:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89062 00:16:55.268 16:36:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:55.268 killing process with pid 89062 00:16:55.268 Received shutdown signal, test time was about 10.000000 seconds 00:16:55.268 00:16:55.268 Latency(us) 00:16:55.268 [2024-11-16T16:36:32.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.268 [2024-11-16T16:36:32.759Z] =================================================================================================================== 00:16:55.268 [2024-11-16T16:36:32.759Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:55.268 16:36:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:55.268 16:36:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89062' 00:16:55.268 16:36:32 -- common/autotest_common.sh@955 -- # kill 89062 00:16:55.268 16:36:32 -- common/autotest_common.sh@960 -- # wait 89062 00:16:55.268 16:36:32 -- target/tls.sh@37 -- # return 1 00:16:55.268 16:36:32 -- common/autotest_common.sh@653 -- # es=1 00:16:55.268 16:36:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:55.268 16:36:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:55.268 16:36:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:55.268 16:36:32 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:55.268 16:36:32 -- common/autotest_common.sh@650 -- # local es=0 00:16:55.268 16:36:32 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:55.268 16:36:32 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:55.268 16:36:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.268 16:36:32 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:55.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
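The final case drops --psk entirely. The listener was created with -k, i.e. TLS required (the "TLS support is considered experimental" notice earlier came from nvmf_subsystem_add_listener), so the plaintext TCP attach that follows will die at the socket layer (watch for the Bad file descriptor on the qpair) before any NVMe-oF negotiation. Both halves, from the traces:

    # target (earlier): listener requires TLS
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    # initiator (this test): no --psk, so the attach is expected to fail
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1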
00:16:55.268 16:36:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.268 16:36:32 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:55.268 16:36:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:55.268 16:36:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:55.268 16:36:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:55.268 16:36:32 -- target/tls.sh@23 -- # psk= 00:16:55.268 16:36:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:55.268 16:36:32 -- target/tls.sh@28 -- # bdevperf_pid=89109 00:16:55.268 16:36:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:55.268 16:36:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.268 16:36:32 -- target/tls.sh@31 -- # waitforlisten 89109 /var/tmp/bdevperf.sock 00:16:55.268 16:36:32 -- common/autotest_common.sh@829 -- # '[' -z 89109 ']' 00:16:55.268 16:36:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.268 16:36:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.268 16:36:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.268 16:36:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.268 16:36:32 -- common/autotest_common.sh@10 -- # set +x 00:16:55.527 [2024-11-16 16:36:32.763657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:55.527 [2024-11-16 16:36:32.763875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89109 ] 00:16:55.527 [2024-11-16 16:36:32.897293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.528 [2024-11-16 16:36:32.945815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.467 16:36:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.467 16:36:33 -- common/autotest_common.sh@862 -- # return 0 00:16:56.467 16:36:33 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:56.726 [2024-11-16 16:36:34.026366] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:56.726 [2024-11-16 16:36:34.027870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e118c0 (9): Bad file descriptor 00:16:56.726 [2024-11-16 16:36:34.028866] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:56.726 [2024-11-16 16:36:34.028890] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:56.726 [2024-11-16 16:36:34.028911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
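Here the PSK is dropped entirely: the cnode1 listener was created with -k (TLS required), so the plain-text attach is closed by the target before initialization completes and every read fails with errno 107 (ENOTCONN), as traced above. The two attach variants, with arguments taken from this log (the working one is confirmed by the TLSTESTn1 run further down; paths relative to the spdk repo):

# failing variant, exactly as traced above: no --psk against a TLS listener
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# working variant: append --psk test/nvmf/target/key_long.txt

The corresponding -32602 error report follows.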
00:16:56.726 2024/11/16 16:36:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:56.726 request: 00:16:56.726 { 00:16:56.726 "method": "bdev_nvme_attach_controller", 00:16:56.726 "params": { 00:16:56.726 "name": "TLSTEST", 00:16:56.726 "trtype": "tcp", 00:16:56.726 "traddr": "10.0.0.2", 00:16:56.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.726 "adrfam": "ipv4", 00:16:56.726 "trsvcid": "4420", 00:16:56.726 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:56.726 } 00:16:56.726 } 00:16:56.726 Got JSON-RPC error response 00:16:56.726 GoRPCClient: error on JSON-RPC call 00:16:56.726 16:36:34 -- target/tls.sh@36 -- # killprocess 89109 00:16:56.726 16:36:34 -- common/autotest_common.sh@936 -- # '[' -z 89109 ']' 00:16:56.726 16:36:34 -- common/autotest_common.sh@940 -- # kill -0 89109 00:16:56.726 16:36:34 -- common/autotest_common.sh@941 -- # uname 00:16:56.726 16:36:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.726 16:36:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89109 00:16:56.726 killing process with pid 89109 00:16:56.726 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.726 00:16:56.726 Latency(us) 00:16:56.726 [2024-11-16T16:36:34.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.726 [2024-11-16T16:36:34.217Z] =================================================================================================================== 00:16:56.726 [2024-11-16T16:36:34.217Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:56.726 16:36:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:56.726 16:36:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:56.726 16:36:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89109' 00:16:56.726 16:36:34 -- common/autotest_common.sh@955 -- # kill 89109 00:16:56.726 16:36:34 -- common/autotest_common.sh@960 -- # wait 89109 00:16:56.986 16:36:34 -- target/tls.sh@37 -- # return 1 00:16:56.986 16:36:34 -- common/autotest_common.sh@653 -- # es=1 00:16:56.986 16:36:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:56.986 16:36:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:56.986 16:36:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:56.986 16:36:34 -- target/tls.sh@167 -- # killprocess 88469 00:16:56.986 16:36:34 -- common/autotest_common.sh@936 -- # '[' -z 88469 ']' 00:16:56.986 16:36:34 -- common/autotest_common.sh@940 -- # kill -0 88469 00:16:56.986 16:36:34 -- common/autotest_common.sh@941 -- # uname 00:16:56.986 16:36:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.986 16:36:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88469 00:16:56.986 killing process with pid 88469 00:16:56.986 16:36:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:56.986 16:36:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:56.986 16:36:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88469' 00:16:56.986 16:36:34 -- common/autotest_common.sh@955 -- # kill 88469 00:16:56.986 16:36:34 -- common/autotest_common.sh@960 -- # wait 88469 00:16:57.244 16:36:34 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:16:57.244 16:36:34 -- target/tls.sh@49 -- # local key hash crc 00:16:57.244 16:36:34 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:57.244 16:36:34 -- target/tls.sh@51 -- # hash=02 00:16:57.244 16:36:34 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:57.244 16:36:34 -- target/tls.sh@52 -- # gzip -1 -c 00:16:57.244 16:36:34 -- target/tls.sh@52 -- # tail -c8 00:16:57.244 16:36:34 -- target/tls.sh@52 -- # head -c 4 00:16:57.244 16:36:34 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:57.244 16:36:34 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:57.244 16:36:34 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:57.244 16:36:34 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:57.244 16:36:34 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:57.244 16:36:34 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:57.244 16:36:34 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:57.244 16:36:34 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:57.244 16:36:34 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:57.245 16:36:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:57.245 16:36:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.245 16:36:34 -- common/autotest_common.sh@10 -- # set +x 00:16:57.245 16:36:34 -- nvmf/common.sh@469 -- # nvmfpid=89170 00:16:57.245 16:36:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:57.245 16:36:34 -- nvmf/common.sh@470 -- # waitforlisten 89170 00:16:57.245 16:36:34 -- common/autotest_common.sh@829 -- # '[' -z 89170 ']' 00:16:57.245 16:36:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.245 16:36:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.245 16:36:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.245 16:36:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.245 16:36:34 -- common/autotest_common.sh@10 -- # set +x 00:16:57.245 [2024-11-16 16:36:34.644104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:57.245 [2024-11-16 16:36:34.644202] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.503 [2024-11-16 16:36:34.785389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.503 [2024-11-16 16:36:34.872746] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.503 [2024-11-16 16:36:34.872897] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
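The format_interchange_psk trace above turns the configured key into its interchange form: the 48-character key material (hash "02", the SHA-384 flavor) is CRC-protected and base64-encoded. A condensed re-derivation of the same value, assuming GNU coreutils; the CRC32 comes from the gzip trailer, whose first four of the last eight bytes are the little-endian CRC32 of the input (the �e�' shown above is that binary CRC rendered as text):

key=00112233445566778899aabbccddeeff0011223344556677
crc=$(printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4)   # gzip trailer = CRC32 + ISIZE
# capturing binary in $crc works here because this particular CRC contains no NUL byte;
# the script itself streams the bytes through /dev/fd instead
printf 'NVMeTLSkey-1:02:%s:\n' "$(printf '%s' "${key}${crc}" | base64)"
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: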
00:16:57.503 [2024-11-16 16:36:34.872912] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.503 [2024-11-16 16:36:34.872920] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.503 [2024-11-16 16:36:34.872956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.439 16:36:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.439 16:36:35 -- common/autotest_common.sh@862 -- # return 0 00:16:58.439 16:36:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:58.439 16:36:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.439 16:36:35 -- common/autotest_common.sh@10 -- # set +x 00:16:58.439 16:36:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.439 16:36:35 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:58.439 16:36:35 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:58.439 16:36:35 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:58.439 [2024-11-16 16:36:35.908517] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.439 16:36:35 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:58.697 16:36:36 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:58.955 [2024-11-16 16:36:36.300555] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:58.955 [2024-11-16 16:36:36.300795] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.955 16:36:36 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:59.214 malloc0 00:16:59.214 16:36:36 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:59.472 16:36:36 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:59.731 16:36:36 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:59.731 16:36:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:59.731 16:36:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:59.731 16:36:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:59.731 16:36:36 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:59.731 16:36:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.731 16:36:36 -- target/tls.sh@28 -- # bdevperf_pid=89271 00:16:59.731 16:36:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.731 16:36:36 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:59.731 16:36:36 -- target/tls.sh@31 -- # waitforlisten 89271 /var/tmp/bdevperf.sock 00:16:59.731 16:36:36 -- 
common/autotest_common.sh@829 -- # '[' -z 89271 ']' 00:16:59.731 16:36:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.731 16:36:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.731 16:36:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.731 16:36:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.731 16:36:36 -- common/autotest_common.sh@10 -- # set +x 00:16:59.731 [2024-11-16 16:36:37.031171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:59.731 [2024-11-16 16:36:37.031492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89271 ] 00:16:59.731 [2024-11-16 16:36:37.171889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.990 [2024-11-16 16:36:37.236526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.557 16:36:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.557 16:36:37 -- common/autotest_common.sh@862 -- # return 0 00:17:00.557 16:36:37 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:00.816 [2024-11-16 16:36:38.116434] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.816 TLSTESTn1 00:17:00.816 16:36:38 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:00.816 Running I/O for 10 seconds... 
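The verify workload is now running; its results table follows. For reference, the pattern used throughout this file: bdevperf is launched with -z so it idles until configured over its RPC socket, the TLS controller is attached through that socket, and perform_tests then starts the preconfigured job. Condensed, with the exact arguments from this log (paths relative to the spdk repo):

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # -t 20 caps the wait; the job itself runs -t 10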
00:17:13.019 00:17:13.019 Latency(us) 00:17:13.019 [2024-11-16T16:36:50.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.019 [2024-11-16T16:36:50.510Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:13.019 Verification LBA range: start 0x0 length 0x2000 00:17:13.019 TLSTESTn1 : 10.02 4697.06 18.35 0.00 0.00 27201.38 6613.18 149660.39 00:17:13.019 [2024-11-16T16:36:50.510Z] =================================================================================================================== 00:17:13.019 [2024-11-16T16:36:50.510Z] Total : 4697.06 18.35 0.00 0.00 27201.38 6613.18 149660.39 00:17:13.019 0 00:17:13.019 16:36:48 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:13.019 16:36:48 -- target/tls.sh@45 -- # killprocess 89271 00:17:13.019 16:36:48 -- common/autotest_common.sh@936 -- # '[' -z 89271 ']' 00:17:13.019 16:36:48 -- common/autotest_common.sh@940 -- # kill -0 89271 00:17:13.019 16:36:48 -- common/autotest_common.sh@941 -- # uname 00:17:13.019 16:36:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.019 16:36:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89271 00:17:13.019 16:36:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:13.019 16:36:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:13.019 killing process with pid 89271 00:17:13.019 16:36:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89271' 00:17:13.019 16:36:48 -- common/autotest_common.sh@955 -- # kill 89271 00:17:13.019 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.019 00:17:13.019 Latency(us) 00:17:13.019 [2024-11-16T16:36:50.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.019 [2024-11-16T16:36:50.511Z] =================================================================================================================== 00:17:13.020 [2024-11-16T16:36:50.511Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.020 16:36:48 -- common/autotest_common.sh@960 -- # wait 89271 00:17:13.020 16:36:48 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.020 16:36:48 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.020 16:36:48 -- common/autotest_common.sh@650 -- # local es=0 00:17:13.020 16:36:48 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.020 16:36:48 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:13.020 16:36:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.020 16:36:48 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:13.020 16:36:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.020 16:36:48 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.020 16:36:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:13.020 16:36:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.020 16:36:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.020 16:36:48 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:13.020 16:36:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.020 16:36:48 -- target/tls.sh@28 -- # bdevperf_pid=89425 00:17:13.020 16:36:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.020 16:36:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.020 16:36:48 -- target/tls.sh@31 -- # waitforlisten 89425 /var/tmp/bdevperf.sock 00:17:13.020 16:36:48 -- common/autotest_common.sh@829 -- # '[' -z 89425 ']' 00:17:13.020 16:36:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.020 16:36:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.020 16:36:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.020 16:36:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.020 16:36:48 -- common/autotest_common.sh@10 -- # set +x 00:17:13.020 [2024-11-16 16:36:48.614743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:13.020 [2024-11-16 16:36:48.614841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89425 ] 00:17:13.020 [2024-11-16 16:36:48.754023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.020 [2024-11-16 16:36:48.803717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.020 16:36:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.020 16:36:49 -- common/autotest_common.sh@862 -- # return 0 00:17:13.020 16:36:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.020 [2024-11-16 16:36:49.839758] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.020 [2024-11-16 16:36:49.839800] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:13.020 2024/11/16 16:36:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:13.020 request: 00:17:13.020 { 00:17:13.020 "method": "bdev_nvme_attach_controller", 00:17:13.020 "params": { 00:17:13.020 "name": "TLSTEST", 00:17:13.020 "trtype": "tcp", 00:17:13.020 "traddr": "10.0.0.2", 00:17:13.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.020 "adrfam": "ipv4", 00:17:13.020 "trsvcid": "4420", 00:17:13.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.020 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:13.020 } 00:17:13.020 } 00:17:13.020 Got 
JSON-RPC error response 00:17:13.020 GoRPCClient: error on JSON-RPC call 00:17:13.020 16:36:49 -- target/tls.sh@36 -- # killprocess 89425 00:17:13.020 16:36:49 -- common/autotest_common.sh@936 -- # '[' -z 89425 ']' 00:17:13.020 16:36:49 -- common/autotest_common.sh@940 -- # kill -0 89425 00:17:13.020 16:36:49 -- common/autotest_common.sh@941 -- # uname 00:17:13.020 16:36:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.020 16:36:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89425 00:17:13.020 16:36:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:13.020 16:36:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:13.020 16:36:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89425' 00:17:13.020 killing process with pid 89425 00:17:13.020 16:36:49 -- common/autotest_common.sh@955 -- # kill 89425 00:17:13.020 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.020 00:17:13.020 Latency(us) 00:17:13.020 [2024-11-16T16:36:50.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.020 [2024-11-16T16:36:50.511Z] =================================================================================================================== 00:17:13.020 [2024-11-16T16:36:50.511Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:13.020 16:36:49 -- common/autotest_common.sh@960 -- # wait 89425 00:17:13.020 16:36:50 -- target/tls.sh@37 -- # return 1 00:17:13.020 16:36:50 -- common/autotest_common.sh@653 -- # es=1 00:17:13.020 16:36:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.020 16:36:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.020 16:36:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.020 16:36:50 -- target/tls.sh@183 -- # killprocess 89170 00:17:13.020 16:36:50 -- common/autotest_common.sh@936 -- # '[' -z 89170 ']' 00:17:13.020 16:36:50 -- common/autotest_common.sh@940 -- # kill -0 89170 00:17:13.020 16:36:50 -- common/autotest_common.sh@941 -- # uname 00:17:13.020 16:36:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.020 16:36:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89170 00:17:13.020 16:36:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:13.020 16:36:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:13.020 killing process with pid 89170 00:17:13.020 16:36:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89170' 00:17:13.020 16:36:50 -- common/autotest_common.sh@955 -- # kill 89170 00:17:13.020 16:36:50 -- common/autotest_common.sh@960 -- # wait 89170 00:17:13.020 16:36:50 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:13.020 16:36:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:13.020 16:36:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:13.020 16:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:13.020 16:36:50 -- nvmf/common.sh@469 -- # nvmfpid=89476 00:17:13.020 16:36:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:13.020 16:36:50 -- nvmf/common.sh@470 -- # waitforlisten 89476 00:17:13.020 16:36:50 -- common/autotest_common.sh@829 -- # '[' -z 89476 ']' 00:17:13.020 16:36:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.020 16:36:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.020 
16:36:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.020 16:36:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.020 16:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:13.020 [2024-11-16 16:36:50.422939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:13.020 [2024-11-16 16:36:50.423004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.278 [2024-11-16 16:36:50.554007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.278 [2024-11-16 16:36:50.644943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:13.278 [2024-11-16 16:36:50.645117] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.278 [2024-11-16 16:36:50.645137] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.278 [2024-11-16 16:36:50.645148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.278 [2024-11-16 16:36:50.645182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.215 16:36:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.215 16:36:51 -- common/autotest_common.sh@862 -- # return 0 00:17:14.215 16:36:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:14.215 16:36:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.215 16:36:51 -- common/autotest_common.sh@10 -- # set +x 00:17:14.215 16:36:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.215 16:36:51 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.215 16:36:51 -- common/autotest_common.sh@650 -- # local es=0 00:17:14.215 16:36:51 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.215 16:36:51 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:14.215 16:36:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.215 16:36:51 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:14.215 16:36:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.215 16:36:51 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.215 16:36:51 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:14.215 16:36:51 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:14.215 [2024-11-16 16:36:51.647528] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.215 16:36:51 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:14.473 16:36:51 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:14.733 
[2024-11-16 16:36:52.119616] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:14.733 [2024-11-16 16:36:52.119892] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.733 16:36:52 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:14.992 malloc0 00:17:14.992 16:36:52 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:15.251 16:36:52 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:15.510 [2024-11-16 16:36:52.790739] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:15.510 [2024-11-16 16:36:52.790789] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:15.510 [2024-11-16 16:36:52.790822] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:15.510 2024/11/16 16:36:52 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:15.510 request: 00:17:15.510 { 00:17:15.510 "method": "nvmf_subsystem_add_host", 00:17:15.510 "params": { 00:17:15.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.510 "host": "nqn.2016-06.io.spdk:host1", 00:17:15.510 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:15.510 } 00:17:15.510 } 00:17:15.510 Got JSON-RPC error response 00:17:15.510 GoRPCClient: error on JSON-RPC call 00:17:15.510 16:36:52 -- common/autotest_common.sh@653 -- # es=1 00:17:15.510 16:36:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:15.510 16:36:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:15.510 16:36:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:15.510 16:36:52 -- target/tls.sh@189 -- # killprocess 89476 00:17:15.510 16:36:52 -- common/autotest_common.sh@936 -- # '[' -z 89476 ']' 00:17:15.510 16:36:52 -- common/autotest_common.sh@940 -- # kill -0 89476 00:17:15.510 16:36:52 -- common/autotest_common.sh@941 -- # uname 00:17:15.510 16:36:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.510 16:36:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89476 00:17:15.510 16:36:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:15.510 16:36:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:15.510 killing process with pid 89476 00:17:15.510 16:36:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89476' 00:17:15.510 16:36:52 -- common/autotest_common.sh@955 -- # kill 89476 00:17:15.510 16:36:52 -- common/autotest_common.sh@960 -- # wait 89476 00:17:15.770 16:36:53 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:15.770 16:36:53 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:15.770 16:36:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:15.770 16:36:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:15.770 16:36:53 -- common/autotest_common.sh@10 -- # set +x 00:17:15.770 16:36:53 -- nvmf/common.sh@469 -- # nvmfpid=89592 
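Both rejections above stem from the same check: after the chmod 0666, the client-side attach failed at bdev_nvme_rpc.c:336 (Code=-22) and the target-side nvmf_subsystem_add_host failed at tcp.c:3551 (Code=-32603), because SPDK refuses a PSK file readable beyond its owner, ssh-private-key style; the test restores 0600 before restarting the target below. A sketch of an equivalent pre-flight check (assumption: owner-only modes are what the check demands; the exact mask SPDK enforces may differ):

key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
case "$(stat -c %a "$key")" in            # GNU stat, as on this Fedora host
    600|400) ;;                           # owner-only: accepted
    *) echo "PSK $key is group/world accessible; run: chmod 0600 $key" >&2 ;;
esac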
00:17:15.770 16:36:53 -- nvmf/common.sh@470 -- # waitforlisten 89592 00:17:15.770 16:36:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.770 16:36:53 -- common/autotest_common.sh@829 -- # '[' -z 89592 ']' 00:17:15.770 16:36:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.770 16:36:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.770 16:36:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.770 16:36:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.770 16:36:53 -- common/autotest_common.sh@10 -- # set +x 00:17:15.770 [2024-11-16 16:36:53.103393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:15.770 [2024-11-16 16:36:53.103482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.770 [2024-11-16 16:36:53.241397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.029 [2024-11-16 16:36:53.297579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:16.029 [2024-11-16 16:36:53.297745] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.029 [2024-11-16 16:36:53.297758] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.029 [2024-11-16 16:36:53.297766] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:16.029 [2024-11-16 16:36:53.297792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.966 16:36:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.966 16:36:54 -- common/autotest_common.sh@862 -- # return 0 00:17:16.966 16:36:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:16.966 16:36:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.966 16:36:54 -- common/autotest_common.sh@10 -- # set +x 00:17:16.966 16:36:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.966 16:36:54 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.966 16:36:54 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:16.966 16:36:54 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:16.966 [2024-11-16 16:36:54.414561] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.966 16:36:54 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:17.224 16:36:54 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:17.483 [2024-11-16 16:36:54.882623] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:17.483 [2024-11-16 16:36:54.882853] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.483 16:36:54 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:17.742 malloc0 00:17:17.742 16:36:55 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:18.000 16:36:55 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:18.258 16:36:55 -- target/tls.sh@197 -- # bdevperf_pid=89690 00:17:18.258 16:36:55 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.258 16:36:55 -- target/tls.sh@200 -- # waitforlisten 89690 /var/tmp/bdevperf.sock 00:17:18.258 16:36:55 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.258 16:36:55 -- common/autotest_common.sh@829 -- # '[' -z 89690 ']' 00:17:18.258 16:36:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.258 16:36:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.258 16:36:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.259 16:36:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.259 16:36:55 -- common/autotest_common.sh@10 -- # set +x 00:17:18.259 [2024-11-16 16:36:55.623623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:18.259 [2024-11-16 16:36:55.623725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89690 ] 00:17:18.518 [2024-11-16 16:36:55.757822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.518 [2024-11-16 16:36:55.820140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.086 16:36:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.086 16:36:56 -- common/autotest_common.sh@862 -- # return 0 00:17:19.086 16:36:56 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:19.345 [2024-11-16 16:36:56.793610] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.603 TLSTESTn1 00:17:19.603 16:36:56 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:19.862 16:36:57 -- target/tls.sh@205 -- # tgtconf='{ 00:17:19.862 "subsystems": [ 00:17:19.862 { 00:17:19.862 "subsystem": "iobuf", 00:17:19.862 "config": [ 00:17:19.862 { 00:17:19.862 "method": "iobuf_set_options", 00:17:19.862 "params": { 00:17:19.862 "large_bufsize": 135168, 00:17:19.862 "large_pool_count": 1024, 00:17:19.862 "small_bufsize": 8192, 00:17:19.862 "small_pool_count": 8192 00:17:19.862 } 00:17:19.862 } 00:17:19.862 ] 00:17:19.862 }, 00:17:19.863 { 00:17:19.863 "subsystem": "sock", 00:17:19.863 "config": [ 00:17:19.863 { 00:17:19.863 "method": "sock_impl_set_options", 00:17:19.863 "params": { 00:17:19.863 "enable_ktls": false, 00:17:19.863 "enable_placement_id": 0, 00:17:19.863 "enable_quickack": false, 00:17:19.863 "enable_recv_pipe": true, 00:17:19.863 "enable_zerocopy_send_client": false, 00:17:19.863 "enable_zerocopy_send_server": true, 00:17:19.863 "impl_name": "posix", 00:17:19.863 "recv_buf_size": 2097152, 00:17:19.863 "send_buf_size": 2097152, 00:17:19.863 "tls_version": 0, 00:17:19.863 "zerocopy_threshold": 0 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "sock_impl_set_options", 00:17:19.863 "params": { 00:17:19.863 "enable_ktls": false, 00:17:19.863 "enable_placement_id": 0, 00:17:19.863 "enable_quickack": false, 00:17:19.863 "enable_recv_pipe": true, 00:17:19.863 "enable_zerocopy_send_client": false, 00:17:19.863 "enable_zerocopy_send_server": true, 00:17:19.863 "impl_name": "ssl", 00:17:19.863 "recv_buf_size": 4096, 00:17:19.863 "send_buf_size": 4096, 00:17:19.863 "tls_version": 0, 00:17:19.863 "zerocopy_threshold": 0 00:17:19.863 } 00:17:19.863 } 00:17:19.863 ] 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "subsystem": "vmd", 00:17:19.863 "config": [] 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "subsystem": "accel", 00:17:19.863 "config": [ 00:17:19.863 { 00:17:19.863 "method": "accel_set_options", 00:17:19.863 "params": { 00:17:19.863 "buf_count": 2048, 00:17:19.863 "large_cache_size": 16, 00:17:19.863 "sequence_count": 2048, 00:17:19.863 "small_cache_size": 128, 00:17:19.863 "task_count": 2048 00:17:19.863 } 00:17:19.863 } 00:17:19.863 ] 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "subsystem": "bdev", 00:17:19.863 "config": [ 00:17:19.863 { 00:17:19.863 "method": "bdev_set_options", 00:17:19.863 "params": { 00:17:19.863 
"bdev_auto_examine": true, 00:17:19.863 "bdev_io_cache_size": 256, 00:17:19.863 "bdev_io_pool_size": 65535, 00:17:19.863 "iobuf_large_cache_size": 16, 00:17:19.863 "iobuf_small_cache_size": 128 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "bdev_raid_set_options", 00:17:19.863 "params": { 00:17:19.863 "process_window_size_kb": 1024 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "bdev_iscsi_set_options", 00:17:19.863 "params": { 00:17:19.863 "timeout_sec": 30 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "bdev_nvme_set_options", 00:17:19.863 "params": { 00:17:19.863 "action_on_timeout": "none", 00:17:19.863 "allow_accel_sequence": false, 00:17:19.863 "arbitration_burst": 0, 00:17:19.863 "bdev_retry_count": 3, 00:17:19.863 "ctrlr_loss_timeout_sec": 0, 00:17:19.863 "delay_cmd_submit": true, 00:17:19.863 "fast_io_fail_timeout_sec": 0, 00:17:19.863 "generate_uuids": false, 00:17:19.863 "high_priority_weight": 0, 00:17:19.863 "io_path_stat": false, 00:17:19.863 "io_queue_requests": 0, 00:17:19.863 "keep_alive_timeout_ms": 10000, 00:17:19.863 "low_priority_weight": 0, 00:17:19.863 "medium_priority_weight": 0, 00:17:19.863 "nvme_adminq_poll_period_us": 10000, 00:17:19.863 "nvme_ioq_poll_period_us": 0, 00:17:19.863 "reconnect_delay_sec": 0, 00:17:19.863 "timeout_admin_us": 0, 00:17:19.863 "timeout_us": 0, 00:17:19.863 "transport_ack_timeout": 0, 00:17:19.863 "transport_retry_count": 4, 00:17:19.863 "transport_tos": 0 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "bdev_nvme_set_hotplug", 00:17:19.863 "params": { 00:17:19.863 "enable": false, 00:17:19.863 "period_us": 100000 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "bdev_malloc_create", 00:17:19.863 "params": { 00:17:19.863 "block_size": 4096, 00:17:19.863 "name": "malloc0", 00:17:19.863 "num_blocks": 8192, 00:17:19.863 "optimal_io_boundary": 0, 00:17:19.863 "physical_block_size": 4096, 00:17:19.863 "uuid": "a3be7850-4a61-428c-9dfd-32ed33d4f343" 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "bdev_wait_for_examine" 00:17:19.863 } 00:17:19.863 ] 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "subsystem": "nbd", 00:17:19.863 "config": [] 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "subsystem": "scheduler", 00:17:19.863 "config": [ 00:17:19.863 { 00:17:19.863 "method": "framework_set_scheduler", 00:17:19.863 "params": { 00:17:19.863 "name": "static" 00:17:19.863 } 00:17:19.863 } 00:17:19.863 ] 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "subsystem": "nvmf", 00:17:19.863 "config": [ 00:17:19.863 { 00:17:19.863 "method": "nvmf_set_config", 00:17:19.863 "params": { 00:17:19.863 "admin_cmd_passthru": { 00:17:19.863 "identify_ctrlr": false 00:17:19.863 }, 00:17:19.863 "discovery_filter": "match_any" 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "nvmf_set_max_subsystems", 00:17:19.863 "params": { 00:17:19.863 "max_subsystems": 1024 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "nvmf_set_crdt", 00:17:19.863 "params": { 00:17:19.863 "crdt1": 0, 00:17:19.863 "crdt2": 0, 00:17:19.863 "crdt3": 0 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "nvmf_create_transport", 00:17:19.863 "params": { 00:17:19.863 "abort_timeout_sec": 1, 00:17:19.863 "buf_cache_size": 4294967295, 00:17:19.863 "c2h_success": false, 00:17:19.863 "dif_insert_or_strip": false, 00:17:19.863 "in_capsule_data_size": 4096, 00:17:19.863 "io_unit_size": 131072, 00:17:19.863 "max_aq_depth": 128, 
00:17:19.863 "max_io_qpairs_per_ctrlr": 127, 00:17:19.863 "max_io_size": 131072, 00:17:19.863 "max_queue_depth": 128, 00:17:19.863 "num_shared_buffers": 511, 00:17:19.863 "sock_priority": 0, 00:17:19.863 "trtype": "TCP", 00:17:19.863 "zcopy": false 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "nvmf_create_subsystem", 00:17:19.863 "params": { 00:17:19.863 "allow_any_host": false, 00:17:19.863 "ana_reporting": false, 00:17:19.863 "max_cntlid": 65519, 00:17:19.863 "max_namespaces": 10, 00:17:19.863 "min_cntlid": 1, 00:17:19.863 "model_number": "SPDK bdev Controller", 00:17:19.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.863 "serial_number": "SPDK00000000000001" 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "nvmf_subsystem_add_host", 00:17:19.863 "params": { 00:17:19.863 "host": "nqn.2016-06.io.spdk:host1", 00:17:19.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.863 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "nvmf_subsystem_add_ns", 00:17:19.863 "params": { 00:17:19.863 "namespace": { 00:17:19.863 "bdev_name": "malloc0", 00:17:19.863 "nguid": "A3BE78504A61428C9DFD32ED33D4F343", 00:17:19.863 "nsid": 1, 00:17:19.863 "uuid": "a3be7850-4a61-428c-9dfd-32ed33d4f343" 00:17:19.863 }, 00:17:19.863 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:19.863 } 00:17:19.863 }, 00:17:19.863 { 00:17:19.863 "method": "nvmf_subsystem_add_listener", 00:17:19.863 "params": { 00:17:19.863 "listen_address": { 00:17:19.863 "adrfam": "IPv4", 00:17:19.863 "traddr": "10.0.0.2", 00:17:19.863 "trsvcid": "4420", 00:17:19.863 "trtype": "TCP" 00:17:19.863 }, 00:17:19.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.864 "secure_channel": true 00:17:19.864 } 00:17:19.864 } 00:17:19.864 ] 00:17:19.864 } 00:17:19.864 ] 00:17:19.864 }' 00:17:19.864 16:36:57 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:20.123 16:36:57 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:20.123 "subsystems": [ 00:17:20.123 { 00:17:20.123 "subsystem": "iobuf", 00:17:20.123 "config": [ 00:17:20.123 { 00:17:20.123 "method": "iobuf_set_options", 00:17:20.123 "params": { 00:17:20.123 "large_bufsize": 135168, 00:17:20.123 "large_pool_count": 1024, 00:17:20.123 "small_bufsize": 8192, 00:17:20.123 "small_pool_count": 8192 00:17:20.123 } 00:17:20.123 } 00:17:20.123 ] 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "subsystem": "sock", 00:17:20.123 "config": [ 00:17:20.123 { 00:17:20.123 "method": "sock_impl_set_options", 00:17:20.123 "params": { 00:17:20.123 "enable_ktls": false, 00:17:20.123 "enable_placement_id": 0, 00:17:20.123 "enable_quickack": false, 00:17:20.123 "enable_recv_pipe": true, 00:17:20.123 "enable_zerocopy_send_client": false, 00:17:20.123 "enable_zerocopy_send_server": true, 00:17:20.123 "impl_name": "posix", 00:17:20.123 "recv_buf_size": 2097152, 00:17:20.123 "send_buf_size": 2097152, 00:17:20.123 "tls_version": 0, 00:17:20.123 "zerocopy_threshold": 0 00:17:20.123 } 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "method": "sock_impl_set_options", 00:17:20.123 "params": { 00:17:20.123 "enable_ktls": false, 00:17:20.123 "enable_placement_id": 0, 00:17:20.123 "enable_quickack": false, 00:17:20.123 "enable_recv_pipe": true, 00:17:20.123 "enable_zerocopy_send_client": false, 00:17:20.123 "enable_zerocopy_send_server": true, 00:17:20.123 "impl_name": "ssl", 00:17:20.123 "recv_buf_size": 4096, 00:17:20.123 "send_buf_size": 4096, 00:17:20.123 
"tls_version": 0, 00:17:20.123 "zerocopy_threshold": 0 00:17:20.123 } 00:17:20.123 } 00:17:20.123 ] 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "subsystem": "vmd", 00:17:20.123 "config": [] 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "subsystem": "accel", 00:17:20.123 "config": [ 00:17:20.123 { 00:17:20.123 "method": "accel_set_options", 00:17:20.123 "params": { 00:17:20.123 "buf_count": 2048, 00:17:20.123 "large_cache_size": 16, 00:17:20.123 "sequence_count": 2048, 00:17:20.123 "small_cache_size": 128, 00:17:20.123 "task_count": 2048 00:17:20.123 } 00:17:20.123 } 00:17:20.123 ] 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "subsystem": "bdev", 00:17:20.123 "config": [ 00:17:20.123 { 00:17:20.123 "method": "bdev_set_options", 00:17:20.123 "params": { 00:17:20.123 "bdev_auto_examine": true, 00:17:20.123 "bdev_io_cache_size": 256, 00:17:20.123 "bdev_io_pool_size": 65535, 00:17:20.123 "iobuf_large_cache_size": 16, 00:17:20.123 "iobuf_small_cache_size": 128 00:17:20.123 } 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "method": "bdev_raid_set_options", 00:17:20.123 "params": { 00:17:20.123 "process_window_size_kb": 1024 00:17:20.123 } 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "method": "bdev_iscsi_set_options", 00:17:20.123 "params": { 00:17:20.123 "timeout_sec": 30 00:17:20.123 } 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "method": "bdev_nvme_set_options", 00:17:20.123 "params": { 00:17:20.123 "action_on_timeout": "none", 00:17:20.123 "allow_accel_sequence": false, 00:17:20.123 "arbitration_burst": 0, 00:17:20.123 "bdev_retry_count": 3, 00:17:20.123 "ctrlr_loss_timeout_sec": 0, 00:17:20.123 "delay_cmd_submit": true, 00:17:20.123 "fast_io_fail_timeout_sec": 0, 00:17:20.123 "generate_uuids": false, 00:17:20.123 "high_priority_weight": 0, 00:17:20.123 "io_path_stat": false, 00:17:20.123 "io_queue_requests": 512, 00:17:20.123 "keep_alive_timeout_ms": 10000, 00:17:20.123 "low_priority_weight": 0, 00:17:20.123 "medium_priority_weight": 0, 00:17:20.123 "nvme_adminq_poll_period_us": 10000, 00:17:20.123 "nvme_ioq_poll_period_us": 0, 00:17:20.123 "reconnect_delay_sec": 0, 00:17:20.123 "timeout_admin_us": 0, 00:17:20.123 "timeout_us": 0, 00:17:20.123 "transport_ack_timeout": 0, 00:17:20.123 "transport_retry_count": 4, 00:17:20.123 "transport_tos": 0 00:17:20.123 } 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "method": "bdev_nvme_attach_controller", 00:17:20.123 "params": { 00:17:20.123 "adrfam": "IPv4", 00:17:20.123 "ctrlr_loss_timeout_sec": 0, 00:17:20.123 "ddgst": false, 00:17:20.123 "fast_io_fail_timeout_sec": 0, 00:17:20.123 "hdgst": false, 00:17:20.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.123 "name": "TLSTEST", 00:17:20.123 "prchk_guard": false, 00:17:20.123 "prchk_reftag": false, 00:17:20.123 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:20.123 "reconnect_delay_sec": 0, 00:17:20.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.123 "traddr": "10.0.0.2", 00:17:20.123 "trsvcid": "4420", 00:17:20.123 "trtype": "TCP" 00:17:20.123 } 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "method": "bdev_nvme_set_hotplug", 00:17:20.123 "params": { 00:17:20.123 "enable": false, 00:17:20.123 "period_us": 100000 00:17:20.123 } 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "method": "bdev_wait_for_examine" 00:17:20.123 } 00:17:20.123 ] 00:17:20.123 }, 00:17:20.123 { 00:17:20.123 "subsystem": "nbd", 00:17:20.123 "config": [] 00:17:20.123 } 00:17:20.123 ] 00:17:20.123 }' 00:17:20.123 16:36:57 -- target/tls.sh@208 -- # killprocess 89690 00:17:20.123 16:36:57 -- 
common/autotest_common.sh@936 -- # '[' -z 89690 ']' 00:17:20.123 16:36:57 -- common/autotest_common.sh@940 -- # kill -0 89690 00:17:20.123 16:36:57 -- common/autotest_common.sh@941 -- # uname 00:17:20.123 16:36:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.123 16:36:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89690 00:17:20.123 16:36:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:20.123 16:36:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:20.123 16:36:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89690' 00:17:20.123 killing process with pid 89690 00:17:20.123 16:36:57 -- common/autotest_common.sh@955 -- # kill 89690 00:17:20.123 16:36:57 -- common/autotest_common.sh@960 -- # wait 89690 00:17:20.123 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.123 00:17:20.123 Latency(us) 00:17:20.123 [2024-11-16T16:36:57.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.123 [2024-11-16T16:36:57.614Z] =================================================================================================================== 00:17:20.123 [2024-11-16T16:36:57.614Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.383 16:36:57 -- target/tls.sh@209 -- # killprocess 89592 00:17:20.383 16:36:57 -- common/autotest_common.sh@936 -- # '[' -z 89592 ']' 00:17:20.383 16:36:57 -- common/autotest_common.sh@940 -- # kill -0 89592 00:17:20.383 16:36:57 -- common/autotest_common.sh@941 -- # uname 00:17:20.383 16:36:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.383 16:36:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89592 00:17:20.383 16:36:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:20.383 killing process with pid 89592 00:17:20.383 16:36:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:20.383 16:36:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89592' 00:17:20.383 16:36:57 -- common/autotest_common.sh@955 -- # kill 89592 00:17:20.383 16:36:57 -- common/autotest_common.sh@960 -- # wait 89592 00:17:20.643 16:36:57 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:20.643 16:36:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.643 16:36:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.643 16:36:57 -- common/autotest_common.sh@10 -- # set +x 00:17:20.643 16:36:57 -- target/tls.sh@212 -- # echo '{ 00:17:20.643 "subsystems": [ 00:17:20.643 { 00:17:20.643 "subsystem": "iobuf", 00:17:20.643 "config": [ 00:17:20.643 { 00:17:20.643 "method": "iobuf_set_options", 00:17:20.643 "params": { 00:17:20.643 "large_bufsize": 135168, 00:17:20.643 "large_pool_count": 1024, 00:17:20.643 "small_bufsize": 8192, 00:17:20.643 "small_pool_count": 8192 00:17:20.643 } 00:17:20.643 } 00:17:20.643 ] 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "subsystem": "sock", 00:17:20.643 "config": [ 00:17:20.643 { 00:17:20.643 "method": "sock_impl_set_options", 00:17:20.643 "params": { 00:17:20.643 "enable_ktls": false, 00:17:20.643 "enable_placement_id": 0, 00:17:20.643 "enable_quickack": false, 00:17:20.643 "enable_recv_pipe": true, 00:17:20.643 "enable_zerocopy_send_client": false, 00:17:20.643 "enable_zerocopy_send_server": true, 00:17:20.643 "impl_name": "posix", 00:17:20.643 "recv_buf_size": 2097152, 00:17:20.643 "send_buf_size": 2097152, 00:17:20.643 "tls_version": 0, 00:17:20.643 
"zerocopy_threshold": 0 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "sock_impl_set_options", 00:17:20.643 "params": { 00:17:20.643 "enable_ktls": false, 00:17:20.643 "enable_placement_id": 0, 00:17:20.643 "enable_quickack": false, 00:17:20.643 "enable_recv_pipe": true, 00:17:20.643 "enable_zerocopy_send_client": false, 00:17:20.643 "enable_zerocopy_send_server": true, 00:17:20.643 "impl_name": "ssl", 00:17:20.643 "recv_buf_size": 4096, 00:17:20.643 "send_buf_size": 4096, 00:17:20.643 "tls_version": 0, 00:17:20.643 "zerocopy_threshold": 0 00:17:20.643 } 00:17:20.643 } 00:17:20.643 ] 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "subsystem": "vmd", 00:17:20.643 "config": [] 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "subsystem": "accel", 00:17:20.643 "config": [ 00:17:20.643 { 00:17:20.643 "method": "accel_set_options", 00:17:20.643 "params": { 00:17:20.643 "buf_count": 2048, 00:17:20.643 "large_cache_size": 16, 00:17:20.643 "sequence_count": 2048, 00:17:20.643 "small_cache_size": 128, 00:17:20.643 "task_count": 2048 00:17:20.643 } 00:17:20.643 } 00:17:20.643 ] 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "subsystem": "bdev", 00:17:20.643 "config": [ 00:17:20.643 { 00:17:20.643 "method": "bdev_set_options", 00:17:20.643 "params": { 00:17:20.643 "bdev_auto_examine": true, 00:17:20.643 "bdev_io_cache_size": 256, 00:17:20.643 "bdev_io_pool_size": 65535, 00:17:20.643 "iobuf_large_cache_size": 16, 00:17:20.643 "iobuf_small_cache_size": 128 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "bdev_raid_set_options", 00:17:20.643 "params": { 00:17:20.643 "process_window_size_kb": 1024 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "bdev_iscsi_set_options", 00:17:20.643 "params": { 00:17:20.643 "timeout_sec": 30 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "bdev_nvme_set_options", 00:17:20.643 "params": { 00:17:20.643 "action_on_timeout": "none", 00:17:20.643 "allow_accel_sequence": false, 00:17:20.643 "arbitration_burst": 0, 00:17:20.643 "bdev_retry_count": 3, 00:17:20.643 "ctrlr_loss_timeout_sec": 0, 00:17:20.643 "delay_cmd_submit": true, 00:17:20.643 "fast_io_fail_timeout_sec": 0, 00:17:20.643 "generate_uuids": false, 00:17:20.643 "high_priority_weight": 0, 00:17:20.643 "io_path_stat": false, 00:17:20.643 "io_queue_requests": 0, 00:17:20.643 "keep_alive_timeout_ms": 10000, 00:17:20.643 "low_priority_weight": 0, 00:17:20.643 "medium_priority_weight": 0, 00:17:20.643 "nvme_adminq_poll_period_us": 10000, 00:17:20.643 "nvme_ioq_poll_period_us": 0, 00:17:20.643 "reconnect_delay_sec": 0, 00:17:20.643 "timeout_admin_us": 0, 00:17:20.643 "timeout_us": 0, 00:17:20.643 "transport_ack_timeout": 0, 00:17:20.643 "transport_retry_count": 4, 00:17:20.643 "transport_tos": 0 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "bdev_nvme_set_hotplug", 00:17:20.643 "params": { 00:17:20.643 "enable": false, 00:17:20.643 "period_us": 100000 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "bdev_malloc_create", 00:17:20.643 "params": { 00:17:20.643 "block_size": 4096, 00:17:20.643 "name": "malloc0", 00:17:20.643 "num_blocks": 8192, 00:17:20.643 "optimal_io_boundary": 0, 00:17:20.643 "physical_block_size": 4096, 00:17:20.643 "uuid": "a3be7850-4a61-428c-9dfd-32ed33d4f343" 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "bdev_wait_for_examine" 00:17:20.643 } 00:17:20.643 ] 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "subsystem": "nbd", 00:17:20.643 "config": [] 00:17:20.643 }, 
00:17:20.643 { 00:17:20.643 "subsystem": "scheduler", 00:17:20.643 "config": [ 00:17:20.643 { 00:17:20.643 "method": "framework_set_scheduler", 00:17:20.643 "params": { 00:17:20.643 "name": "static" 00:17:20.643 } 00:17:20.643 } 00:17:20.643 ] 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "subsystem": "nvmf", 00:17:20.643 "config": [ 00:17:20.643 { 00:17:20.643 "method": "nvmf_set_config", 00:17:20.643 "params": { 00:17:20.643 "admin_cmd_passthru": { 00:17:20.643 "identify_ctrlr": false 00:17:20.643 }, 00:17:20.643 "discovery_filter": "match_any" 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "nvmf_set_max_subsystems", 00:17:20.643 "params": { 00:17:20.643 "max_subsystems": 1024 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "nvmf_set_crdt", 00:17:20.643 "params": { 00:17:20.643 "crdt1": 0, 00:17:20.643 "crdt2": 0, 00:17:20.643 "crdt3": 0 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "nvmf_create_transport", 00:17:20.643 "params": { 00:17:20.643 "abort_timeout_sec": 1, 00:17:20.643 "buf_cache_size": 4294967295, 00:17:20.643 "c2h_success": false, 00:17:20.643 "dif_insert_or_strip": false, 00:17:20.643 "in_capsule_data_size": 4096, 00:17:20.643 "io_unit_size": 131072, 00:17:20.643 "max_aq_depth": 128, 00:17:20.643 "max_io_qpairs_per_ctrlr": 127, 00:17:20.643 "max_io_size": 131072, 00:17:20.643 "max_queue_depth": 128, 00:17:20.643 "num_shared_buffers": 511, 00:17:20.643 "sock_priority": 0, 00:17:20.643 "trtype": "TCP", 00:17:20.643 "zcopy": false 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "nvmf_create_subsystem", 00:17:20.643 "params": { 00:17:20.643 "allow_any_host": false, 00:17:20.643 "ana_reporting": false, 00:17:20.643 "max_cntlid": 65519, 00:17:20.643 "max_namespaces": 10, 00:17:20.643 "min_cntlid": 1, 00:17:20.643 "model_number": "SPDK bdev Controller", 00:17:20.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.643 "serial_number": "SPDK00000000000001" 00:17:20.643 } 00:17:20.643 }, 00:17:20.643 { 00:17:20.643 "method": "nvmf_subsystem_add_host", 00:17:20.643 "params": { 00:17:20.643 "host": "nqn.2016-06.io.spdk:host1", 00:17:20.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.643 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:20.643 } 00:17:20.644 }, 00:17:20.644 { 00:17:20.644 "method": "nvmf_subsystem_add_ns", 00:17:20.644 "params": { 00:17:20.644 "namespace": { 00:17:20.644 "bdev_name": "malloc0", 00:17:20.644 "nguid": "A3BE78504A61428C9DFD32ED33D4F343", 00:17:20.644 "nsid": 1, 00:17:20.644 "uuid": "a3be7850-4a61-428c-9dfd-32ed33d4f343" 00:17:20.644 }, 00:17:20.644 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:20.644 } 00:17:20.644 }, 00:17:20.644 { 00:17:20.644 "method": "nvmf_subsystem_add_listener", 00:17:20.644 "params": { 00:17:20.644 "listen_address": { 00:17:20.644 "adrfam": "IPv4", 00:17:20.644 "traddr": "10.0.0.2", 00:17:20.644 "trsvcid": "4420", 00:17:20.644 "trtype": "TCP" 00:17:20.644 }, 00:17:20.644 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.644 "secure_channel": true 00:17:20.644 } 00:17:20.644 } 00:17:20.644 ] 00:17:20.644 } 00:17:20.644 ] 00:17:20.644 }' 00:17:20.644 16:36:57 -- nvmf/common.sh@469 -- # nvmfpid=89769 00:17:20.644 16:36:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:20.644 16:36:57 -- nvmf/common.sh@470 -- # waitforlisten 89769 00:17:20.644 16:36:57 -- common/autotest_common.sh@829 -- # '[' -z 89769 ']' 00:17:20.644 16:36:57 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.644 16:36:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.644 16:36:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.644 16:36:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.644 16:36:57 -- common/autotest_common.sh@10 -- # set +x 00:17:20.644 [2024-11-16 16:36:57.984391] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.644 [2024-11-16 16:36:57.984503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.644 [2024-11-16 16:36:58.118967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.903 [2024-11-16 16:36:58.170294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.903 [2024-11-16 16:36:58.170476] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.903 [2024-11-16 16:36:58.170489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.903 [2024-11-16 16:36:58.170497] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.903 [2024-11-16 16:36:58.170520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.903 [2024-11-16 16:36:58.378878] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.162 [2024-11-16 16:36:58.410833] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:21.162 [2024-11-16 16:36:58.411043] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.728 16:36:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.728 16:36:58 -- common/autotest_common.sh@862 -- # return 0 00:17:21.728 16:36:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.728 16:36:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.728 16:36:58 -- common/autotest_common.sh@10 -- # set +x 00:17:21.728 16:36:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.728 16:36:59 -- target/tls.sh@216 -- # bdevperf_pid=89813 00:17:21.728 16:36:59 -- target/tls.sh@217 -- # waitforlisten 89813 /var/tmp/bdevperf.sock 00:17:21.728 16:36:59 -- common/autotest_common.sh@829 -- # '[' -z 89813 ']' 00:17:21.728 16:36:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.728 16:36:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.728 16:36:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
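Both SPDK apps in this test receive their JSON configuration through a file descriptor rather than a file on disk: the large echoed document above is wired to -c /dev/fd/62 (the target) and, below, to -c /dev/fd/63 (bdevperf). A minimal sketch of that pattern plus the waitforlisten polling loop being traced here; the variable name config_json is hypothetical, and the fd number assigned by process substitution may differ from the harness's explicit redirection:

# Feed an inline JSON config to an SPDK app via process substitution, then
# poll its RPC socket until it answers. This is roughly what waitforlisten
# does, with a retry cap of 100.
config_json='{ "subsystems": [] }'
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$config_json") &
pid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1   # rpc.py exits non-zero until the app listens on the socket
done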
00:17:21.728 16:36:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.728 16:36:59 -- common/autotest_common.sh@10 -- # set +x 00:17:21.728 16:36:59 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:21.728 16:36:59 -- target/tls.sh@213 -- # echo '{ 00:17:21.728 "subsystems": [ 00:17:21.728 { 00:17:21.728 "subsystem": "iobuf", 00:17:21.728 "config": [ 00:17:21.728 { 00:17:21.728 "method": "iobuf_set_options", 00:17:21.728 "params": { 00:17:21.728 "large_bufsize": 135168, 00:17:21.728 "large_pool_count": 1024, 00:17:21.728 "small_bufsize": 8192, 00:17:21.728 "small_pool_count": 8192 00:17:21.728 } 00:17:21.728 } 00:17:21.728 ] 00:17:21.728 }, 00:17:21.728 { 00:17:21.728 "subsystem": "sock", 00:17:21.728 "config": [ 00:17:21.728 { 00:17:21.728 "method": "sock_impl_set_options", 00:17:21.728 "params": { 00:17:21.728 "enable_ktls": false, 00:17:21.728 "enable_placement_id": 0, 00:17:21.728 "enable_quickack": false, 00:17:21.728 "enable_recv_pipe": true, 00:17:21.728 "enable_zerocopy_send_client": false, 00:17:21.728 "enable_zerocopy_send_server": true, 00:17:21.728 "impl_name": "posix", 00:17:21.728 "recv_buf_size": 2097152, 00:17:21.728 "send_buf_size": 2097152, 00:17:21.728 "tls_version": 0, 00:17:21.728 "zerocopy_threshold": 0 00:17:21.728 } 00:17:21.728 }, 00:17:21.728 { 00:17:21.728 "method": "sock_impl_set_options", 00:17:21.728 "params": { 00:17:21.728 "enable_ktls": false, 00:17:21.728 "enable_placement_id": 0, 00:17:21.728 "enable_quickack": false, 00:17:21.728 "enable_recv_pipe": true, 00:17:21.728 "enable_zerocopy_send_client": false, 00:17:21.728 "enable_zerocopy_send_server": true, 00:17:21.728 "impl_name": "ssl", 00:17:21.728 "recv_buf_size": 4096, 00:17:21.728 "send_buf_size": 4096, 00:17:21.728 "tls_version": 0, 00:17:21.728 "zerocopy_threshold": 0 00:17:21.728 } 00:17:21.728 } 00:17:21.728 ] 00:17:21.728 }, 00:17:21.728 { 00:17:21.728 "subsystem": "vmd", 00:17:21.728 "config": [] 00:17:21.728 }, 00:17:21.728 { 00:17:21.728 "subsystem": "accel", 00:17:21.728 "config": [ 00:17:21.728 { 00:17:21.728 "method": "accel_set_options", 00:17:21.728 "params": { 00:17:21.728 "buf_count": 2048, 00:17:21.728 "large_cache_size": 16, 00:17:21.728 "sequence_count": 2048, 00:17:21.728 "small_cache_size": 128, 00:17:21.728 "task_count": 2048 00:17:21.728 } 00:17:21.728 } 00:17:21.728 ] 00:17:21.728 }, 00:17:21.728 { 00:17:21.728 "subsystem": "bdev", 00:17:21.728 "config": [ 00:17:21.728 { 00:17:21.728 "method": "bdev_set_options", 00:17:21.728 "params": { 00:17:21.728 "bdev_auto_examine": true, 00:17:21.728 "bdev_io_cache_size": 256, 00:17:21.728 "bdev_io_pool_size": 65535, 00:17:21.728 "iobuf_large_cache_size": 16, 00:17:21.728 "iobuf_small_cache_size": 128 00:17:21.728 } 00:17:21.728 }, 00:17:21.728 { 00:17:21.728 "method": "bdev_raid_set_options", 00:17:21.728 "params": { 00:17:21.729 "process_window_size_kb": 1024 00:17:21.729 } 00:17:21.729 }, 00:17:21.729 { 00:17:21.729 "method": "bdev_iscsi_set_options", 00:17:21.729 "params": { 00:17:21.729 "timeout_sec": 30 00:17:21.729 } 00:17:21.729 }, 00:17:21.729 { 00:17:21.729 "method": "bdev_nvme_set_options", 00:17:21.729 "params": { 00:17:21.729 "action_on_timeout": "none", 00:17:21.729 "allow_accel_sequence": false, 00:17:21.729 "arbitration_burst": 0, 00:17:21.729 "bdev_retry_count": 3, 00:17:21.729 "ctrlr_loss_timeout_sec": 0, 00:17:21.729 "delay_cmd_submit": true, 00:17:21.729 "fast_io_fail_timeout_sec": 0, 
00:17:21.729 "generate_uuids": false, 00:17:21.729 "high_priority_weight": 0, 00:17:21.729 "io_path_stat": false, 00:17:21.729 "io_queue_requests": 512, 00:17:21.729 "keep_alive_timeout_ms": 10000, 00:17:21.729 "low_priority_weight": 0, 00:17:21.729 "medium_priority_weight": 0, 00:17:21.729 "nvme_adminq_poll_period_us": 10000, 00:17:21.729 "nvme_ioq_poll_period_us": 0, 00:17:21.729 "reconnect_delay_sec": 0, 00:17:21.729 "timeout_admin_us": 0, 00:17:21.729 "timeout_us": 0, 00:17:21.729 "transport_ack_timeout": 0, 00:17:21.729 "transport_retry_count": 4, 00:17:21.729 "transport_tos": 0 00:17:21.729 } 00:17:21.729 }, 00:17:21.729 { 00:17:21.729 "method": "bdev_nvme_attach_controller", 00:17:21.729 "params": { 00:17:21.729 "adrfam": "IPv4", 00:17:21.729 "ctrlr_loss_timeout_sec": 0, 00:17:21.729 "ddgst": false, 00:17:21.729 "fast_io_fail_timeout_sec": 0, 00:17:21.729 "hdgst": false, 00:17:21.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.729 "name": "TLSTEST", 00:17:21.729 "prchk_guard": false, 00:17:21.729 "prchk_reftag": false, 00:17:21.729 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:21.729 "reconnect_delay_sec": 0, 00:17:21.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.729 "traddr": "10.0.0.2", 00:17:21.729 "trsvcid": "4420", 00:17:21.729 "trtype": "TCP" 00:17:21.729 } 00:17:21.729 }, 00:17:21.729 { 00:17:21.729 "method": "bdev_nvme_set_hotplug", 00:17:21.729 "params": { 00:17:21.729 "enable": false, 00:17:21.729 "period_us": 100000 00:17:21.729 } 00:17:21.729 }, 00:17:21.729 { 00:17:21.729 "method": "bdev_wait_for_examine" 00:17:21.729 } 00:17:21.729 ] 00:17:21.729 }, 00:17:21.729 { 00:17:21.729 "subsystem": "nbd", 00:17:21.729 "config": [] 00:17:21.729 } 00:17:21.729 ] 00:17:21.729 }' 00:17:21.729 [2024-11-16 16:36:59.079817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:21.729 [2024-11-16 16:36:59.079916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89813 ] 00:17:21.987 [2024-11-16 16:36:59.221220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.987 [2024-11-16 16:36:59.295804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.987 [2024-11-16 16:36:59.446156] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.586 16:37:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.586 16:37:00 -- common/autotest_common.sh@862 -- # return 0 00:17:22.586 16:37:00 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:22.844 Running I/O for 10 seconds... 
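The bdev section of the config above ends with a bdev_nvme_attach_controller entry carrying the same PSK file the target registered for host1; that shared key is what upgrades the TCP connection to TLS. The equivalent per-call RPC form, which the FIPS test later in this log issues directly, looks like this (paths and NQNs taken verbatim from the log):

# Attach an NVMe-oF controller over TLS from the initiator side; --psk points
# at the interchange-format key file also registered on the target.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
# bdevperf.py then connects to the same socket and kicks off the queued run:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests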
00:17:32.819 00:17:32.819 Latency(us) 00:17:32.819 [2024-11-16T16:37:10.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.819 [2024-11-16T16:37:10.310Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:32.819 Verification LBA range: start 0x0 length 0x2000 00:17:32.819 TLSTESTn1 : 10.02 5423.56 21.19 0.00 0.00 23557.24 5898.24 394645.88 00:17:32.819 [2024-11-16T16:37:10.310Z] =================================================================================================================== 00:17:32.819 [2024-11-16T16:37:10.310Z] Total : 5423.56 21.19 0.00 0.00 23557.24 5898.24 394645.88 00:17:32.819 0 00:17:32.819 16:37:10 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.819 16:37:10 -- target/tls.sh@223 -- # killprocess 89813 00:17:32.819 16:37:10 -- common/autotest_common.sh@936 -- # '[' -z 89813 ']' 00:17:32.819 16:37:10 -- common/autotest_common.sh@940 -- # kill -0 89813 00:17:32.819 16:37:10 -- common/autotest_common.sh@941 -- # uname 00:17:32.819 16:37:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.819 16:37:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89813 00:17:32.819 16:37:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:32.819 16:37:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:32.819 killing process with pid 89813 00:17:32.819 16:37:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89813' 00:17:32.819 16:37:10 -- common/autotest_common.sh@955 -- # kill 89813 00:17:32.819 16:37:10 -- common/autotest_common.sh@960 -- # wait 89813 00:17:32.819 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.819 00:17:32.819 Latency(us) 00:17:32.819 [2024-11-16T16:37:10.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.819 [2024-11-16T16:37:10.310Z] =================================================================================================================== 00:17:32.819 [2024-11-16T16:37:10.310Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:33.078 16:37:10 -- target/tls.sh@224 -- # killprocess 89769 00:17:33.078 16:37:10 -- common/autotest_common.sh@936 -- # '[' -z 89769 ']' 00:17:33.078 16:37:10 -- common/autotest_common.sh@940 -- # kill -0 89769 00:17:33.078 16:37:10 -- common/autotest_common.sh@941 -- # uname 00:17:33.078 16:37:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.078 16:37:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89769 00:17:33.078 16:37:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:33.078 killing process with pid 89769 00:17:33.078 16:37:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:33.078 16:37:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89769' 00:17:33.078 16:37:10 -- common/autotest_common.sh@955 -- # kill 89769 00:17:33.078 16:37:10 -- common/autotest_common.sh@960 -- # wait 89769 00:17:33.336 16:37:10 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:33.336 16:37:10 -- target/tls.sh@227 -- # cleanup 00:17:33.336 16:37:10 -- target/tls.sh@15 -- # process_shm --id 0 00:17:33.336 16:37:10 -- common/autotest_common.sh@806 -- # type=--id 00:17:33.336 16:37:10 -- common/autotest_common.sh@807 -- # id=0 00:17:33.336 16:37:10 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:33.336 16:37:10 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
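The killprocess helper traced in the lines above follows a fixed pattern: probe the pid with a null signal, confirm the process identity, log, signal, then reap. A condensed sketch (the real helper in autotest_common.sh also special-cases sudo-wrapped processes and checks the OS, both skipped here):

# kill -0 sends no signal; it only tests whether the pid is still alive.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap the child so its exit status is observed
}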
-printf '%f\n' 00:17:33.336 16:37:10 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:33.336 16:37:10 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:33.336 16:37:10 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:33.336 16:37:10 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:33.336 nvmf_trace.0 00:17:33.336 16:37:10 -- common/autotest_common.sh@821 -- # return 0 00:17:33.336 16:37:10 -- target/tls.sh@16 -- # killprocess 89813 00:17:33.336 16:37:10 -- common/autotest_common.sh@936 -- # '[' -z 89813 ']' 00:17:33.336 16:37:10 -- common/autotest_common.sh@940 -- # kill -0 89813 00:17:33.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89813) - No such process 00:17:33.336 Process with pid 89813 is not found 00:17:33.336 16:37:10 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89813 is not found' 00:17:33.336 16:37:10 -- target/tls.sh@17 -- # nvmftestfini 00:17:33.336 16:37:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:33.336 16:37:10 -- nvmf/common.sh@116 -- # sync 00:17:33.904 16:37:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:33.904 16:37:11 -- nvmf/common.sh@119 -- # set +e 00:17:33.904 16:37:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:33.904 16:37:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:33.904 rmmod nvme_tcp 00:17:33.904 rmmod nvme_fabrics 00:17:33.904 rmmod nvme_keyring 00:17:33.904 16:37:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:33.904 16:37:11 -- nvmf/common.sh@123 -- # set -e 00:17:33.904 16:37:11 -- nvmf/common.sh@124 -- # return 0 00:17:33.904 16:37:11 -- nvmf/common.sh@477 -- # '[' -n 89769 ']' 00:17:33.904 16:37:11 -- nvmf/common.sh@478 -- # killprocess 89769 00:17:33.904 16:37:11 -- common/autotest_common.sh@936 -- # '[' -z 89769 ']' 00:17:33.904 16:37:11 -- common/autotest_common.sh@940 -- # kill -0 89769 00:17:33.904 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89769) - No such process 00:17:33.904 Process with pid 89769 is not found 00:17:33.904 16:37:11 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89769 is not found' 00:17:33.904 16:37:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:33.904 16:37:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:33.904 16:37:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:33.904 16:37:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.904 16:37:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:33.904 16:37:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.904 16:37:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.904 16:37:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.904 16:37:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:33.904 16:37:11 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:33.904 00:17:33.904 real 1m10.559s 00:17:33.904 user 1m43.883s 00:17:33.904 sys 0m26.631s 00:17:33.904 16:37:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:33.904 16:37:11 -- common/autotest_common.sh@10 -- # set +x 00:17:33.904 ************************************ 00:17:33.904 END TEST nvmf_tls 00:17:33.904 
************************************ 00:17:33.904 16:37:11 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:33.904 16:37:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:33.904 16:37:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.904 16:37:11 -- common/autotest_common.sh@10 -- # set +x 00:17:33.904 ************************************ 00:17:33.904 START TEST nvmf_fips 00:17:33.904 ************************************ 00:17:33.904 16:37:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:33.904 * Looking for test storage... 00:17:33.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:33.904 16:37:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:33.904 16:37:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:33.904 16:37:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:34.164 16:37:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:34.164 16:37:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:34.164 16:37:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:34.164 16:37:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:34.164 16:37:11 -- scripts/common.sh@335 -- # IFS=.-: 00:17:34.164 16:37:11 -- scripts/common.sh@335 -- # read -ra ver1 00:17:34.164 16:37:11 -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.164 16:37:11 -- scripts/common.sh@336 -- # read -ra ver2 00:17:34.164 16:37:11 -- scripts/common.sh@337 -- # local 'op=<' 00:17:34.164 16:37:11 -- scripts/common.sh@339 -- # ver1_l=2 00:17:34.164 16:37:11 -- scripts/common.sh@340 -- # ver2_l=1 00:17:34.164 16:37:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:34.164 16:37:11 -- scripts/common.sh@343 -- # case "$op" in 00:17:34.164 16:37:11 -- scripts/common.sh@344 -- # : 1 00:17:34.164 16:37:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:34.164 16:37:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.164 16:37:11 -- scripts/common.sh@364 -- # decimal 1 00:17:34.164 16:37:11 -- scripts/common.sh@352 -- # local d=1 00:17:34.164 16:37:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.164 16:37:11 -- scripts/common.sh@354 -- # echo 1 00:17:34.164 16:37:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:34.164 16:37:11 -- scripts/common.sh@365 -- # decimal 2 00:17:34.164 16:37:11 -- scripts/common.sh@352 -- # local d=2 00:17:34.164 16:37:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.164 16:37:11 -- scripts/common.sh@354 -- # echo 2 00:17:34.164 16:37:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:34.164 16:37:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:34.164 16:37:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:34.164 16:37:11 -- scripts/common.sh@367 -- # return 0 00:17:34.164 16:37:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.164 16:37:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:34.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.164 --rc genhtml_branch_coverage=1 00:17:34.165 --rc genhtml_function_coverage=1 00:17:34.165 --rc genhtml_legend=1 00:17:34.165 --rc geninfo_all_blocks=1 00:17:34.165 --rc geninfo_unexecuted_blocks=1 00:17:34.165 00:17:34.165 ' 00:17:34.165 16:37:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:34.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.165 --rc genhtml_branch_coverage=1 00:17:34.165 --rc genhtml_function_coverage=1 00:17:34.165 --rc genhtml_legend=1 00:17:34.165 --rc geninfo_all_blocks=1 00:17:34.165 --rc geninfo_unexecuted_blocks=1 00:17:34.165 00:17:34.165 ' 00:17:34.165 16:37:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:34.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.165 --rc genhtml_branch_coverage=1 00:17:34.165 --rc genhtml_function_coverage=1 00:17:34.165 --rc genhtml_legend=1 00:17:34.165 --rc geninfo_all_blocks=1 00:17:34.165 --rc geninfo_unexecuted_blocks=1 00:17:34.165 00:17:34.165 ' 00:17:34.165 16:37:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:34.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.165 --rc genhtml_branch_coverage=1 00:17:34.165 --rc genhtml_function_coverage=1 00:17:34.165 --rc genhtml_legend=1 00:17:34.165 --rc geninfo_all_blocks=1 00:17:34.165 --rc geninfo_unexecuted_blocks=1 00:17:34.165 00:17:34.165 ' 00:17:34.165 16:37:11 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.165 16:37:11 -- nvmf/common.sh@7 -- # uname -s 00:17:34.165 16:37:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.165 16:37:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.165 16:37:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.165 16:37:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.165 16:37:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.165 16:37:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.165 16:37:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.165 16:37:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.165 16:37:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.165 16:37:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.165 16:37:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:17:34.165 
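The host identity used throughout these tests comes straight from nvme-cli, as the NVME_HOSTNQN assignment just above shows. A sketch of the derivation; the suffix-stripping expansion is an assumption about how common.sh extracts the ID, though the resulting values match the log:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID, e.g. dcaf3c85-349e-474a-91c8-b5dfcb47b007
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")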
16:37:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:17:34.165 16:37:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.165 16:37:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.165 16:37:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.165 16:37:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.165 16:37:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.165 16:37:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.165 16:37:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.165 16:37:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.165 16:37:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.165 16:37:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.165 16:37:11 -- paths/export.sh@5 -- # export PATH 00:17:34.165 16:37:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.165 16:37:11 -- nvmf/common.sh@46 -- # : 0 00:17:34.165 16:37:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:34.165 16:37:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:34.165 16:37:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:34.165 16:37:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.165 16:37:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.165 16:37:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
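build_nvmf_app_args, traced here, assembles the target's command line as a bash array so the network-namespace prefix can be spliced in later without re-quoting (the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") step appears further down, right before the target launches). A condensed sketch; the initial binary path is an assumption:

# Arrays keep word-splitting safe when the 'ip netns exec' prefix is added.
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" -m 0x2 &   # matches the launch line seen earlier in the log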
00:17:34.165 16:37:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:34.165 16:37:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:34.165 16:37:11 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.165 16:37:11 -- fips/fips.sh@89 -- # check_openssl_version 00:17:34.165 16:37:11 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:34.165 16:37:11 -- fips/fips.sh@85 -- # openssl version 00:17:34.165 16:37:11 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:34.165 16:37:11 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:17:34.165 16:37:11 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:34.165 16:37:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:34.165 16:37:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:34.165 16:37:11 -- scripts/common.sh@335 -- # IFS=.-: 00:17:34.165 16:37:11 -- scripts/common.sh@335 -- # read -ra ver1 00:17:34.165 16:37:11 -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.165 16:37:11 -- scripts/common.sh@336 -- # read -ra ver2 00:17:34.165 16:37:11 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:34.165 16:37:11 -- scripts/common.sh@339 -- # ver1_l=3 00:17:34.165 16:37:11 -- scripts/common.sh@340 -- # ver2_l=3 00:17:34.165 16:37:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:34.165 16:37:11 -- scripts/common.sh@343 -- # case "$op" in 00:17:34.165 16:37:11 -- scripts/common.sh@347 -- # : 1 00:17:34.165 16:37:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:34.165 16:37:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.165 16:37:11 -- scripts/common.sh@364 -- # decimal 3 00:17:34.165 16:37:11 -- scripts/common.sh@352 -- # local d=3 00:17:34.165 16:37:11 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:34.165 16:37:11 -- scripts/common.sh@354 -- # echo 3 00:17:34.165 16:37:11 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:34.165 16:37:11 -- scripts/common.sh@365 -- # decimal 3 00:17:34.165 16:37:11 -- scripts/common.sh@352 -- # local d=3 00:17:34.165 16:37:11 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:34.165 16:37:11 -- scripts/common.sh@354 -- # echo 3 00:17:34.165 16:37:11 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:34.165 16:37:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:34.165 16:37:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:34.165 16:37:11 -- scripts/common.sh@363 -- # (( v++ )) 00:17:34.165 16:37:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.165 16:37:11 -- scripts/common.sh@364 -- # decimal 1 00:17:34.165 16:37:11 -- scripts/common.sh@352 -- # local d=1 00:17:34.165 16:37:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.165 16:37:11 -- scripts/common.sh@354 -- # echo 1 00:17:34.165 16:37:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:34.165 16:37:11 -- scripts/common.sh@365 -- # decimal 0 00:17:34.165 16:37:11 -- scripts/common.sh@352 -- # local d=0 00:17:34.165 16:37:11 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:34.165 16:37:11 -- scripts/common.sh@354 -- # echo 0 00:17:34.165 16:37:11 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:34.165 16:37:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:34.165 16:37:11 -- scripts/common.sh@366 -- # return 0 00:17:34.165 16:37:11 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:34.165 16:37:11 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:34.165 16:37:11 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:34.165 16:37:11 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:34.165 16:37:11 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:34.165 16:37:11 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:34.165 16:37:11 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:34.165 16:37:11 -- fips/fips.sh@113 -- # build_openssl_config 00:17:34.165 16:37:11 -- fips/fips.sh@37 -- # cat 00:17:34.165 16:37:11 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:34.165 16:37:11 -- fips/fips.sh@58 -- # cat - 00:17:34.165 16:37:11 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:34.165 16:37:11 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:34.165 16:37:11 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:34.165 16:37:11 -- fips/fips.sh@116 -- # openssl list -providers 00:17:34.165 16:37:11 -- fips/fips.sh@116 -- # grep name 00:17:34.165 16:37:11 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:34.165 16:37:11 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:34.165 16:37:11 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:34.165 16:37:11 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:34.165 16:37:11 -- fips/fips.sh@127 -- # : 00:17:34.165 16:37:11 -- common/autotest_common.sh@650 -- # local es=0 00:17:34.165 16:37:11 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:34.165 16:37:11 -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:34.165 16:37:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.165 16:37:11 -- common/autotest_common.sh@642 -- # type -t openssl 00:17:34.165 16:37:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.165 16:37:11 -- common/autotest_common.sh@644 -- # type -P openssl 00:17:34.165 16:37:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.166 16:37:11 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:34.166 16:37:11 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:34.166 16:37:11 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:34.166 Error setting digest 00:17:34.166 40027902A37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:34.166 40027902A37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:34.166 16:37:11 -- common/autotest_common.sh@653 -- # es=1 00:17:34.166 16:37:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:34.166 16:37:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:34.166 16:37:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.166 16:37:11 -- fips/fips.sh@130 -- # nvmftestinit 00:17:34.166 16:37:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:34.166 16:37:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.166 16:37:11 -- nvmf/common.sh@436 -- # prepare_net_devs 
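The NOT openssl md5 /dev/fd/62 assertion above is the core FIPS proof of this test: with OPENSSL_CONF pointing at the generated spdk_fips.conf, fetching a non-approved digest must fail, and the traced "Error setting digest" output confirms it did. A sketch of the same check, inverted into a standalone guard:

# With a FIPS-only OpenSSL config active, MD5 must be unavailable;
# the test passes only when this command fails.
export OPENSSL_CONF=spdk_fips.conf
if echo -n test | openssl md5 >/dev/null 2>&1; then
    echo "FIPS provider not enforcing: MD5 succeeded" >&2
    exit 1
fi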
00:17:34.166 16:37:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:34.166 16:37:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:34.166 16:37:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.166 16:37:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.166 16:37:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.166 16:37:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:34.166 16:37:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:34.166 16:37:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:34.166 16:37:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:34.166 16:37:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:34.166 16:37:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:34.166 16:37:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.166 16:37:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.166 16:37:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:34.166 16:37:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:34.166 16:37:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.166 16:37:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.166 16:37:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.166 16:37:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.166 16:37:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.166 16:37:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.166 16:37:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.166 16:37:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.166 16:37:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:34.166 16:37:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:34.425 Cannot find device "nvmf_tgt_br" 00:17:34.425 16:37:11 -- nvmf/common.sh@154 -- # true 00:17:34.425 16:37:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.425 Cannot find device "nvmf_tgt_br2" 00:17:34.425 16:37:11 -- nvmf/common.sh@155 -- # true 00:17:34.425 16:37:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:34.425 16:37:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:34.425 Cannot find device "nvmf_tgt_br" 00:17:34.425 16:37:11 -- nvmf/common.sh@157 -- # true 00:17:34.425 16:37:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:34.425 Cannot find device "nvmf_tgt_br2" 00:17:34.425 16:37:11 -- nvmf/common.sh@158 -- # true 00:17:34.425 16:37:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:34.425 16:37:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:34.425 16:37:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.425 16:37:11 -- nvmf/common.sh@161 -- # true 00:17:34.425 16:37:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.425 16:37:11 -- nvmf/common.sh@162 -- # true 00:17:34.425 16:37:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.425 16:37:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.425 16:37:11 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.425 16:37:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.425 16:37:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.425 16:37:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.425 16:37:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.425 16:37:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:34.425 16:37:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:34.425 16:37:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:34.425 16:37:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:34.425 16:37:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:34.425 16:37:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:34.425 16:37:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.425 16:37:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.425 16:37:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:34.425 16:37:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:34.425 16:37:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:34.425 16:37:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:34.425 16:37:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:34.683 16:37:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:34.684 16:37:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:34.684 16:37:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:34.684 16:37:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:34.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:34.684 00:17:34.684 --- 10.0.0.2 ping statistics --- 00:17:34.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.684 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:34.684 16:37:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:34.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:34.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:34.684 00:17:34.684 --- 10.0.0.3 ping statistics --- 00:17:34.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.684 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:34.684 16:37:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:34.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:34.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:34.684 00:17:34.684 --- 10.0.0.1 ping statistics --- 00:17:34.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.684 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:34.684 16:37:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.684 16:37:11 -- nvmf/common.sh@421 -- # return 0 00:17:34.684 16:37:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:34.684 16:37:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.684 16:37:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:34.684 16:37:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:34.684 16:37:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.684 16:37:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:34.684 16:37:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:34.684 16:37:11 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:34.684 16:37:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:34.684 16:37:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.684 16:37:11 -- common/autotest_common.sh@10 -- # set +x 00:17:34.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.684 16:37:11 -- nvmf/common.sh@469 -- # nvmfpid=90183 00:17:34.684 16:37:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.684 16:37:11 -- nvmf/common.sh@470 -- # waitforlisten 90183 00:17:34.684 16:37:11 -- common/autotest_common.sh@829 -- # '[' -z 90183 ']' 00:17:34.684 16:37:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.684 16:37:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.684 16:37:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.684 16:37:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.684 16:37:11 -- common/autotest_common.sh@10 -- # set +x 00:17:34.684 [2024-11-16 16:37:12.060299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:34.684 [2024-11-16 16:37:12.060387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.943 [2024-11-16 16:37:12.202195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.943 [2024-11-16 16:37:12.273889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:34.943 [2024-11-16 16:37:12.274072] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.943 [2024-11-16 16:37:12.274090] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.943 [2024-11-16 16:37:12.274103] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
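The ping exchanges above close out nvmf_veth_init: a target network namespace joined to the host by veth pairs and a bridge, verified end to end before the target starts. A condensed sketch of the topology as traced (the second target pair, nvmf_tgt_if2 with 10.0.0.3, is created the same way and omitted here):

# Host side (10.0.0.1) and namespaced target side (10.0.0.2) joined by a
# bridge, with TCP/4420 opened for NVMe-oF traffic.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # the connectivity check whose output appears above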
00:17:34.943 [2024-11-16 16:37:12.274144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.878 16:37:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.878 16:37:13 -- common/autotest_common.sh@862 -- # return 0 00:17:35.878 16:37:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:35.878 16:37:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.878 16:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:35.878 16:37:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.878 16:37:13 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:35.879 16:37:13 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:35.879 16:37:13 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.879 16:37:13 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:35.879 16:37:13 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.879 16:37:13 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.879 16:37:13 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:35.879 16:37:13 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.138 [2024-11-16 16:37:13.387592] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.138 [2024-11-16 16:37:13.403582] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:36.138 [2024-11-16 16:37:13.403755] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.138 malloc0 00:17:36.138 16:37:13 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.138 16:37:13 -- fips/fips.sh@147 -- # bdevperf_pid=90241 00:17:36.138 16:37:13 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.138 16:37:13 -- fips/fips.sh@148 -- # waitforlisten 90241 /var/tmp/bdevperf.sock 00:17:36.138 16:37:13 -- common/autotest_common.sh@829 -- # '[' -z 90241 ']' 00:17:36.138 16:37:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.138 16:37:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.138 16:37:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.138 16:37:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.138 16:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:36.138 [2024-11-16 16:37:13.540718] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
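The key handling traced just above is the whole PSK plumbing of the FIPS test: the interchange-format key is written to a mode-0600 file and registered on both ends, so target and initiator derive the same TLS pre-shared key. A sketch; the exact rpc.py sequence inside setup_nvmf_tgt_conf is not shown in the log, so the per-call form below is an assumption consistent with the JSON config seen earlier:

# Interchange-format TLS PSK (value copied from the trace above).
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt && chmod 0600 key.txt
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key.txt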
00:17:36.138 [2024-11-16 16:37:13.540810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90241 ] 00:17:36.396 [2024-11-16 16:37:13.680538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.396 [2024-11-16 16:37:13.752455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.331 16:37:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.331 16:37:14 -- common/autotest_common.sh@862 -- # return 0 00:17:37.331 16:37:14 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:37.331 [2024-11-16 16:37:14.763219] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.589 TLSTESTn1 00:17:37.589 16:37:14 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:37.589 Running I/O for 10 seconds... 00:17:47.564 00:17:47.564 Latency(us) 00:17:47.564 [2024-11-16T16:37:25.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.564 [2024-11-16T16:37:25.055Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.564 Verification LBA range: start 0x0 length 0x2000 00:17:47.564 TLSTESTn1 : 10.01 5926.64 23.15 0.00 0.00 21569.24 2487.39 35985.22 00:17:47.564 [2024-11-16T16:37:25.055Z] =================================================================================================================== 00:17:47.564 [2024-11-16T16:37:25.055Z] Total : 5926.64 23.15 0.00 0.00 21569.24 2487.39 35985.22 00:17:47.564 0 00:17:47.564 16:37:24 -- fips/fips.sh@1 -- # cleanup 00:17:47.564 16:37:24 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:47.564 16:37:24 -- common/autotest_common.sh@806 -- # type=--id 00:17:47.564 16:37:24 -- common/autotest_common.sh@807 -- # id=0 00:17:47.564 16:37:24 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:47.564 16:37:24 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:47.564 16:37:24 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:47.564 16:37:24 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:47.564 16:37:24 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:47.564 16:37:24 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:47.564 nvmf_trace.0 00:17:47.823 16:37:25 -- common/autotest_common.sh@821 -- # return 0 00:17:47.823 16:37:25 -- fips/fips.sh@16 -- # killprocess 90241 00:17:47.823 16:37:25 -- common/autotest_common.sh@936 -- # '[' -z 90241 ']' 00:17:47.823 16:37:25 -- common/autotest_common.sh@940 -- # kill -0 90241 00:17:47.823 16:37:25 -- common/autotest_common.sh@941 -- # uname 00:17:47.823 16:37:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:47.823 16:37:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90241 00:17:47.823 16:37:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:47.823 16:37:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:47.823 
killing process with pid 90241 00:17:47.823 16:37:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90241' 00:17:47.823 16:37:25 -- common/autotest_common.sh@955 -- # kill 90241 00:17:47.823 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.823 00:17:47.823 Latency(us) 00:17:47.823 [2024-11-16T16:37:25.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.823 [2024-11-16T16:37:25.314Z] =================================================================================================================== 00:17:47.823 [2024-11-16T16:37:25.314Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.823 16:37:25 -- common/autotest_common.sh@960 -- # wait 90241 00:17:48.082 16:37:25 -- fips/fips.sh@17 -- # nvmftestfini 00:17:48.082 16:37:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:48.082 16:37:25 -- nvmf/common.sh@116 -- # sync 00:17:48.082 16:37:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:48.082 16:37:25 -- nvmf/common.sh@119 -- # set +e 00:17:48.082 16:37:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:48.082 16:37:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:48.082 rmmod nvme_tcp 00:17:48.082 rmmod nvme_fabrics 00:17:48.341 rmmod nvme_keyring 00:17:48.341 16:37:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:48.341 16:37:25 -- nvmf/common.sh@123 -- # set -e 00:17:48.341 16:37:25 -- nvmf/common.sh@124 -- # return 0 00:17:48.341 16:37:25 -- nvmf/common.sh@477 -- # '[' -n 90183 ']' 00:17:48.341 16:37:25 -- nvmf/common.sh@478 -- # killprocess 90183 00:17:48.341 16:37:25 -- common/autotest_common.sh@936 -- # '[' -z 90183 ']' 00:17:48.341 16:37:25 -- common/autotest_common.sh@940 -- # kill -0 90183 00:17:48.341 16:37:25 -- common/autotest_common.sh@941 -- # uname 00:17:48.341 16:37:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:48.341 16:37:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90183 00:17:48.341 16:37:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:48.341 killing process with pid 90183 00:17:48.341 16:37:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:48.341 16:37:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90183' 00:17:48.341 16:37:25 -- common/autotest_common.sh@955 -- # kill 90183 00:17:48.341 16:37:25 -- common/autotest_common.sh@960 -- # wait 90183 00:17:48.341 16:37:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:48.341 16:37:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:48.341 16:37:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:48.341 16:37:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.341 16:37:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:48.342 16:37:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.342 16:37:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.342 16:37:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.600 16:37:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:48.600 16:37:25 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:48.600 00:17:48.600 real 0m14.605s 00:17:48.600 user 0m18.944s 00:17:48.600 sys 0m6.370s 00:17:48.600 16:37:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:48.600 16:37:25 -- common/autotest_common.sh@10 -- # set +x 00:17:48.600 ************************************ 00:17:48.600 END TEST nvmf_fips 
00:17:48.600 ************************************ 00:17:48.600 16:37:25 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:48.600 16:37:25 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:48.600 16:37:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:48.600 16:37:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:48.600 16:37:25 -- common/autotest_common.sh@10 -- # set +x 00:17:48.600 ************************************ 00:17:48.600 START TEST nvmf_fuzz 00:17:48.600 ************************************ 00:17:48.600 16:37:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:48.600 * Looking for test storage... 00:17:48.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:48.600 16:37:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:48.600 16:37:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:48.600 16:37:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:48.600 16:37:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:48.600 16:37:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:48.600 16:37:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:48.600 16:37:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:48.600 16:37:26 -- scripts/common.sh@335 -- # IFS=.-: 00:17:48.600 16:37:26 -- scripts/common.sh@335 -- # read -ra ver1 00:17:48.600 16:37:26 -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.600 16:37:26 -- scripts/common.sh@336 -- # read -ra ver2 00:17:48.600 16:37:26 -- scripts/common.sh@337 -- # local 'op=<' 00:17:48.600 16:37:26 -- scripts/common.sh@339 -- # ver1_l=2 00:17:48.600 16:37:26 -- scripts/common.sh@340 -- # ver2_l=1 00:17:48.600 16:37:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:48.600 16:37:26 -- scripts/common.sh@343 -- # case "$op" in 00:17:48.600 16:37:26 -- scripts/common.sh@344 -- # : 1 00:17:48.600 16:37:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:48.600 16:37:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.600 16:37:26 -- scripts/common.sh@364 -- # decimal 1 00:17:48.600 16:37:26 -- scripts/common.sh@352 -- # local d=1 00:17:48.600 16:37:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.600 16:37:26 -- scripts/common.sh@354 -- # echo 1 00:17:48.600 16:37:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:48.600 16:37:26 -- scripts/common.sh@365 -- # decimal 2 00:17:48.600 16:37:26 -- scripts/common.sh@352 -- # local d=2 00:17:48.600 16:37:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.859 16:37:26 -- scripts/common.sh@354 -- # echo 2 00:17:48.859 16:37:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:48.859 16:37:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:48.859 16:37:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:48.859 16:37:26 -- scripts/common.sh@367 -- # return 0 00:17:48.859 16:37:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.859 16:37:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:48.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.860 --rc genhtml_branch_coverage=1 00:17:48.860 --rc genhtml_function_coverage=1 00:17:48.860 --rc genhtml_legend=1 00:17:48.860 --rc geninfo_all_blocks=1 00:17:48.860 --rc geninfo_unexecuted_blocks=1 00:17:48.860 00:17:48.860 ' 00:17:48.860 16:37:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:48.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.860 --rc genhtml_branch_coverage=1 00:17:48.860 --rc genhtml_function_coverage=1 00:17:48.860 --rc genhtml_legend=1 00:17:48.860 --rc geninfo_all_blocks=1 00:17:48.860 --rc geninfo_unexecuted_blocks=1 00:17:48.860 00:17:48.860 ' 00:17:48.860 16:37:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:48.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.860 --rc genhtml_branch_coverage=1 00:17:48.860 --rc genhtml_function_coverage=1 00:17:48.860 --rc genhtml_legend=1 00:17:48.860 --rc geninfo_all_blocks=1 00:17:48.860 --rc geninfo_unexecuted_blocks=1 00:17:48.860 00:17:48.860 ' 00:17:48.860 16:37:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:48.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.860 --rc genhtml_branch_coverage=1 00:17:48.860 --rc genhtml_function_coverage=1 00:17:48.860 --rc genhtml_legend=1 00:17:48.860 --rc geninfo_all_blocks=1 00:17:48.860 --rc geninfo_unexecuted_blocks=1 00:17:48.860 00:17:48.860 ' 00:17:48.860 16:37:26 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.860 16:37:26 -- nvmf/common.sh@7 -- # uname -s 00:17:48.860 16:37:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.860 16:37:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.860 16:37:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.860 16:37:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.860 16:37:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.860 16:37:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.860 16:37:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.860 16:37:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.860 16:37:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.860 16:37:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.860 16:37:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
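The lcov probe at the top of each run_test walks scripts/common.sh's lt/cmp_versions helpers: split both version strings into fields (the trace uses IFS=.-: so pre-release suffixes split too), then compare field by field, treating missing fields as 0. A compact sketch of that comparison, simplified to dot-separated numeric fields (not the exact common.sh implementation):

# Sketch of the dotted-version comparison traced above ("lt 1.15 2").
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)                 # split each version on '.'
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                               # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"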
00:17:48.860 16:37:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:17:48.860 16:37:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.860 16:37:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.860 16:37:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.860 16:37:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.860 16:37:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.860 16:37:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.860 16:37:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.860 16:37:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.860 16:37:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.860 16:37:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.860 16:37:26 -- paths/export.sh@5 -- # export PATH 00:17:48.860 16:37:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.860 16:37:26 -- nvmf/common.sh@46 -- # : 0 00:17:48.860 16:37:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:48.860 16:37:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:48.860 16:37:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:48.860 16:37:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.860 16:37:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.860 16:37:26 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:17:48.860 16:37:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:48.860 16:37:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:48.860 16:37:26 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:48.860 16:37:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:48.860 16:37:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.860 16:37:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:48.860 16:37:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:48.860 16:37:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:48.860 16:37:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.860 16:37:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.860 16:37:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.860 16:37:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:48.860 16:37:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:48.860 16:37:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:48.860 16:37:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:48.860 16:37:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:48.860 16:37:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:48.860 16:37:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.860 16:37:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.860 16:37:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:48.860 16:37:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:48.860 16:37:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:48.860 16:37:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:48.860 16:37:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:48.860 16:37:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.860 16:37:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:48.860 16:37:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:48.860 16:37:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:48.860 16:37:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:48.860 16:37:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:48.860 16:37:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:48.860 Cannot find device "nvmf_tgt_br" 00:17:48.860 16:37:26 -- nvmf/common.sh@154 -- # true 00:17:48.860 16:37:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:48.860 Cannot find device "nvmf_tgt_br2" 00:17:48.860 16:37:26 -- nvmf/common.sh@155 -- # true 00:17:48.860 16:37:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:48.860 16:37:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:48.860 Cannot find device "nvmf_tgt_br" 00:17:48.860 16:37:26 -- nvmf/common.sh@157 -- # true 00:17:48.860 16:37:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:48.860 Cannot find device "nvmf_tgt_br2" 00:17:48.860 16:37:26 -- nvmf/common.sh@158 -- # true 00:17:48.860 16:37:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:48.860 16:37:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:48.860 16:37:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:48.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.860 16:37:26 -- nvmf/common.sh@161 -- # true 00:17:48.860 16:37:26 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:48.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.860 16:37:26 -- nvmf/common.sh@162 -- # true 00:17:48.860 16:37:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:48.860 16:37:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:48.860 16:37:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:48.860 16:37:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:48.860 16:37:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:48.860 16:37:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:48.860 16:37:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:48.860 16:37:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:48.860 16:37:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:48.861 16:37:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:48.861 16:37:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:48.861 16:37:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:48.861 16:37:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:48.861 16:37:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:48.861 16:37:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:48.861 16:37:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:48.861 16:37:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:48.861 16:37:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:48.861 16:37:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:49.119 16:37:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:49.120 16:37:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:49.120 16:37:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:49.120 16:37:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:49.120 16:37:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:49.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:17:49.120 00:17:49.120 --- 10.0.0.2 ping statistics --- 00:17:49.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.120 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:17:49.120 16:37:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:49.120 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:49.120 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:49.120 00:17:49.120 --- 10.0.0.3 ping statistics --- 00:17:49.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.120 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:49.120 16:37:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:49.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:49.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:49.120 00:17:49.120 --- 10.0.0.1 ping statistics --- 00:17:49.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.120 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:49.120 16:37:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.120 16:37:26 -- nvmf/common.sh@421 -- # return 0 00:17:49.120 16:37:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:49.120 16:37:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.120 16:37:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:49.120 16:37:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:49.120 16:37:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.120 16:37:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:49.120 16:37:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:49.120 16:37:26 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90599 00:17:49.120 16:37:26 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:49.120 16:37:26 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:49.120 16:37:26 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90599 00:17:49.120 16:37:26 -- common/autotest_common.sh@829 -- # '[' -z 90599 ']' 00:17:49.120 16:37:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.120 16:37:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.120 16:37:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
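The nvmf_veth_init trace above builds the usual virtual topology for NET_TYPE=virt runs: an initiator veth pair left on the host, target veth pairs moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, plus iptables rules admitting the NVMe/TCP port. Condensed to its essentials (second target interface omitted; interface names and addresses as in the trace):

# Condensed sketch of the veth/netns topology built by nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side moves into the netns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties both halves together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                          # initiator -> target smoke test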
00:17:49.120 16:37:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.120 16:37:26 -- common/autotest_common.sh@10 -- # set +x 00:17:50.055 16:37:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.055 16:37:27 -- common/autotest_common.sh@862 -- # return 0 00:17:50.055 16:37:27 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.055 16:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.055 16:37:27 -- common/autotest_common.sh@10 -- # set +x 00:17:50.055 16:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.055 16:37:27 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:50.055 16:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.055 16:37:27 -- common/autotest_common.sh@10 -- # set +x 00:17:50.313 Malloc0 00:17:50.314 16:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.314 16:37:27 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.314 16:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.314 16:37:27 -- common/autotest_common.sh@10 -- # set +x 00:17:50.314 16:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.314 16:37:27 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.314 16:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.314 16:37:27 -- common/autotest_common.sh@10 -- # set +x 00:17:50.314 16:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.314 16:37:27 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.314 16:37:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.314 16:37:27 -- common/autotest_common.sh@10 -- # set +x 00:17:50.314 16:37:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.314 16:37:27 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:50.314 16:37:27 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:50.573 Shutting down the fuzz application 00:17:50.573 16:37:27 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:50.833 Shutting down the fuzz application 00:17:50.833 16:37:28 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.833 16:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.833 16:37:28 -- common/autotest_common.sh@10 -- # set +x 00:17:50.833 16:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.833 16:37:28 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:50.833 16:37:28 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:50.833 16:37:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:50.833 16:37:28 -- nvmf/common.sh@116 -- # sync 00:17:50.833 16:37:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:50.833 16:37:28 -- nvmf/common.sh@119 -- # set +e 00:17:50.833 16:37:28 -- 
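Everything the fabrics_fuzz run needs on the target side is provisioned through rpc_cmd, a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock. The same sequence written out as direct rpc.py calls (arguments copied from the trace, paths shortened to the repo root; the transport flags are passed through exactly as the harness does):

# Target provisioning for the fuzz pass, as traced above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as traced
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512           # 64 MiB RAM bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Timed fuzz pass (30 s, fixed seed), then a replay of the canned cases in example.json:
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
    -j test/app/fuzz/nvme_fuzz/example.json -a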
nvmf/common.sh@120 -- # for i in {1..20} 00:17:50.833 16:37:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:50.833 rmmod nvme_tcp 00:17:50.833 rmmod nvme_fabrics 00:17:50.833 rmmod nvme_keyring 00:17:50.833 16:37:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:50.833 16:37:28 -- nvmf/common.sh@123 -- # set -e 00:17:50.833 16:37:28 -- nvmf/common.sh@124 -- # return 0 00:17:50.833 16:37:28 -- nvmf/common.sh@477 -- # '[' -n 90599 ']' 00:17:50.833 16:37:28 -- nvmf/common.sh@478 -- # killprocess 90599 00:17:51.092 16:37:28 -- common/autotest_common.sh@936 -- # '[' -z 90599 ']' 00:17:51.092 16:37:28 -- common/autotest_common.sh@940 -- # kill -0 90599 00:17:51.092 16:37:28 -- common/autotest_common.sh@941 -- # uname 00:17:51.092 16:37:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:51.092 16:37:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90599 00:17:51.092 killing process with pid 90599 00:17:51.092 16:37:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:51.092 16:37:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:51.092 16:37:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90599' 00:17:51.092 16:37:28 -- common/autotest_common.sh@955 -- # kill 90599 00:17:51.092 16:37:28 -- common/autotest_common.sh@960 -- # wait 90599 00:17:51.092 16:37:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:51.092 16:37:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:51.092 16:37:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:51.092 16:37:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.092 16:37:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:51.092 16:37:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.092 16:37:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.092 16:37:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.350 16:37:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:51.350 16:37:28 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:51.350 00:17:51.350 real 0m2.703s 00:17:51.350 user 0m2.814s 00:17:51.350 sys 0m0.679s 00:17:51.350 16:37:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:51.350 ************************************ 00:17:51.350 END TEST nvmf_fuzz 00:17:51.350 ************************************ 00:17:51.350 16:37:28 -- common/autotest_common.sh@10 -- # set +x 00:17:51.350 16:37:28 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:51.350 16:37:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:51.350 16:37:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.350 16:37:28 -- common/autotest_common.sh@10 -- # set +x 00:17:51.350 ************************************ 00:17:51.350 START TEST nvmf_multiconnection 00:17:51.350 ************************************ 00:17:51.350 16:37:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:51.350 * Looking for test storage... 
00:17:51.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:51.350 16:37:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:51.350 16:37:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:51.350 16:37:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:51.610 16:37:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:51.610 16:37:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:51.610 16:37:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:51.610 16:37:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:51.610 16:37:28 -- scripts/common.sh@335 -- # IFS=.-: 00:17:51.610 16:37:28 -- scripts/common.sh@335 -- # read -ra ver1 00:17:51.610 16:37:28 -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.610 16:37:28 -- scripts/common.sh@336 -- # read -ra ver2 00:17:51.610 16:37:28 -- scripts/common.sh@337 -- # local 'op=<' 00:17:51.610 16:37:28 -- scripts/common.sh@339 -- # ver1_l=2 00:17:51.610 16:37:28 -- scripts/common.sh@340 -- # ver2_l=1 00:17:51.610 16:37:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:51.610 16:37:28 -- scripts/common.sh@343 -- # case "$op" in 00:17:51.610 16:37:28 -- scripts/common.sh@344 -- # : 1 00:17:51.610 16:37:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:51.610 16:37:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:51.610 16:37:28 -- scripts/common.sh@364 -- # decimal 1 00:17:51.610 16:37:28 -- scripts/common.sh@352 -- # local d=1 00:17:51.610 16:37:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.610 16:37:28 -- scripts/common.sh@354 -- # echo 1 00:17:51.610 16:37:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:51.610 16:37:28 -- scripts/common.sh@365 -- # decimal 2 00:17:51.610 16:37:28 -- scripts/common.sh@352 -- # local d=2 00:17:51.610 16:37:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.610 16:37:28 -- scripts/common.sh@354 -- # echo 2 00:17:51.610 16:37:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:51.610 16:37:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:51.610 16:37:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:51.610 16:37:28 -- scripts/common.sh@367 -- # return 0 00:17:51.610 16:37:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.610 16:37:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:51.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.610 --rc genhtml_branch_coverage=1 00:17:51.610 --rc genhtml_function_coverage=1 00:17:51.610 --rc genhtml_legend=1 00:17:51.610 --rc geninfo_all_blocks=1 00:17:51.610 --rc geninfo_unexecuted_blocks=1 00:17:51.610 00:17:51.610 ' 00:17:51.610 16:37:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:51.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.610 --rc genhtml_branch_coverage=1 00:17:51.610 --rc genhtml_function_coverage=1 00:17:51.610 --rc genhtml_legend=1 00:17:51.610 --rc geninfo_all_blocks=1 00:17:51.610 --rc geninfo_unexecuted_blocks=1 00:17:51.610 00:17:51.610 ' 00:17:51.610 16:37:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:51.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.610 --rc genhtml_branch_coverage=1 00:17:51.610 --rc genhtml_function_coverage=1 00:17:51.610 --rc genhtml_legend=1 00:17:51.610 --rc geninfo_all_blocks=1 00:17:51.610 --rc geninfo_unexecuted_blocks=1 00:17:51.610 00:17:51.610 ' 00:17:51.610 
16:37:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:51.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.610 --rc genhtml_branch_coverage=1 00:17:51.610 --rc genhtml_function_coverage=1 00:17:51.610 --rc genhtml_legend=1 00:17:51.610 --rc geninfo_all_blocks=1 00:17:51.610 --rc geninfo_unexecuted_blocks=1 00:17:51.610 00:17:51.610 ' 00:17:51.610 16:37:28 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.610 16:37:28 -- nvmf/common.sh@7 -- # uname -s 00:17:51.610 16:37:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.610 16:37:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.610 16:37:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.610 16:37:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.610 16:37:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.610 16:37:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.610 16:37:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.610 16:37:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.610 16:37:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.610 16:37:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.610 16:37:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:17:51.610 16:37:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:17:51.610 16:37:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.610 16:37:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.610 16:37:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.610 16:37:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.610 16:37:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.610 16:37:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.610 16:37:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.610 16:37:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.610 16:37:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.610 16:37:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.610 16:37:28 -- paths/export.sh@5 -- # export PATH 00:17:51.610 16:37:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.610 16:37:28 -- nvmf/common.sh@46 -- # : 0 00:17:51.610 16:37:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:51.610 16:37:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:51.610 16:37:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:51.610 16:37:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.610 16:37:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.610 16:37:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:51.610 16:37:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:51.610 16:37:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:51.610 16:37:28 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.610 16:37:28 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.610 16:37:28 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:51.610 16:37:28 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:51.610 16:37:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:51.610 16:37:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.610 16:37:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:51.610 16:37:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:51.610 16:37:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:51.610 16:37:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.610 16:37:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.610 16:37:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.610 16:37:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:51.610 16:37:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:51.610 16:37:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:51.610 16:37:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:51.610 16:37:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:51.610 16:37:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:51.610 16:37:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.610 16:37:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.610 16:37:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:51.610 16:37:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:51.610 16:37:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:51.611 16:37:28 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:51.611 16:37:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:51.611 16:37:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.611 16:37:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:51.611 16:37:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:51.611 16:37:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:51.611 16:37:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:51.611 16:37:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:51.611 16:37:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:51.611 Cannot find device "nvmf_tgt_br" 00:17:51.611 16:37:28 -- nvmf/common.sh@154 -- # true 00:17:51.611 16:37:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.611 Cannot find device "nvmf_tgt_br2" 00:17:51.611 16:37:28 -- nvmf/common.sh@155 -- # true 00:17:51.611 16:37:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:51.611 16:37:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:51.611 Cannot find device "nvmf_tgt_br" 00:17:51.611 16:37:28 -- nvmf/common.sh@157 -- # true 00:17:51.611 16:37:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:51.611 Cannot find device "nvmf_tgt_br2" 00:17:51.611 16:37:28 -- nvmf/common.sh@158 -- # true 00:17:51.611 16:37:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:51.611 16:37:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:51.611 16:37:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.611 16:37:29 -- nvmf/common.sh@161 -- # true 00:17:51.611 16:37:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.611 16:37:29 -- nvmf/common.sh@162 -- # true 00:17:51.611 16:37:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.611 16:37:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.611 16:37:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.611 16:37:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.611 16:37:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.611 16:37:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.611 16:37:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.611 16:37:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:51.611 16:37:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:51.611 16:37:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:51.611 16:37:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:51.870 16:37:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:51.870 16:37:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:51.870 16:37:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.870 16:37:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:17:51.870 16:37:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.870 16:37:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:51.870 16:37:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:51.870 16:37:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.870 16:37:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.870 16:37:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.870 16:37:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.870 16:37:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.870 16:37:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:51.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:17:51.870 00:17:51.870 --- 10.0.0.2 ping statistics --- 00:17:51.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.870 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:51.870 16:37:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:51.870 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:51.870 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:51.870 00:17:51.870 --- 10.0.0.3 ping statistics --- 00:17:51.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.870 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:51.870 16:37:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:51.870 00:17:51.870 --- 10.0.0.1 ping statistics --- 00:17:51.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.870 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:51.870 16:37:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.870 16:37:29 -- nvmf/common.sh@421 -- # return 0 00:17:51.870 16:37:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:51.870 16:37:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.870 16:37:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:51.870 16:37:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:51.870 16:37:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.870 16:37:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:51.870 16:37:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:51.870 16:37:29 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:51.870 16:37:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.870 16:37:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.870 16:37:29 -- common/autotest_common.sh@10 -- # set +x 00:17:51.870 16:37:29 -- nvmf/common.sh@469 -- # nvmfpid=90812 00:17:51.870 16:37:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.870 16:37:29 -- nvmf/common.sh@470 -- # waitforlisten 90812 00:17:51.870 16:37:29 -- common/autotest_common.sh@829 -- # '[' -z 90812 ']' 00:17:51.870 16:37:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.870 16:37:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.870 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:17:51.870 16:37:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.870 16:37:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.870 16:37:29 -- common/autotest_common.sh@10 -- # set +x 00:17:51.870 [2024-11-16 16:37:29.273963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:51.870 [2024-11-16 16:37:29.274048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.130 [2024-11-16 16:37:29.414981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.130 [2024-11-16 16:37:29.474529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:52.130 [2024-11-16 16:37:29.474698] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.130 [2024-11-16 16:37:29.474711] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.130 [2024-11-16 16:37:29.474719] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.130 [2024-11-16 16:37:29.474896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.130 [2024-11-16 16:37:29.475765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.130 [2024-11-16 16:37:29.475946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.130 [2024-11-16 16:37:29.475955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.065 16:37:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.065 16:37:30 -- common/autotest_common.sh@862 -- # return 0 00:17:53.065 16:37:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:53.065 16:37:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.065 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.065 16:37:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.065 16:37:30 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:53.065 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.065 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.065 [2024-11-16 16:37:30.338815] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.065 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.065 16:37:30 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:53.065 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.065 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:53.065 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.065 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.065 Malloc1 00:17:53.065 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.065 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:53.065 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.065 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.065 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.065 
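The startup banner above shows how the '-m 0xF' mask handed to nvmf_tgt becomes the DPDK EAL '-c 0xF' coremask and, in turn, one reactor per selected core (cores 0-3 here). For reference, expanding such a mask is just a bit test per core (an illustrative snippet, not SPDK code):

# '-m 0xF' selects cores 0-3, matching the four reactors reported above.
mask=0xF
for (( core = 0; core < 64; core++ )); do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done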
16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:53.065 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.065 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.065 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.065 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.065 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.065 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.065 [2024-11-16 16:37:30.410318] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.065 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.065 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.066 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 Malloc2 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.066 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 Malloc3 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.066 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 Malloc4 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.066 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.066 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:53.066 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.066 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.325 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 Malloc5 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.325 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 Malloc6 00:17:53.325 16:37:30 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.325 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 Malloc7 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.325 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 Malloc8 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 
-- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:53.325 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.325 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.325 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.325 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.325 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:53.326 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.326 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.326 Malloc9 00:17:53.326 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.326 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:53.326 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.326 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.326 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.326 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:53.326 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.326 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:53.585 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.585 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.585 16:37:30 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:53.585 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.585 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 Malloc10 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:53.585 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.585 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:53.585 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.585 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:53.585 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.585 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.585 16:37:30 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:53.585 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.585 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 Malloc11 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:53.585 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.585 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:53.585 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.585 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:53.585 16:37:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.585 16:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:53.585 16:37:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.585 16:37:30 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:53.585 16:37:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:53.585 16:37:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:53.844 16:37:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:53.844 16:37:31 -- common/autotest_common.sh@1187 -- # local i=0 00:17:53.844 16:37:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:53.844 16:37:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:53.844 16:37:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:55.748 16:37:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:55.748 16:37:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:55.748 16:37:33 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:17:55.748 16:37:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:55.748 16:37:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.748 16:37:33 -- common/autotest_common.sh@1197 -- # return 0 00:17:55.748 16:37:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:55.748 16:37:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:56.007 16:37:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:56.007 16:37:33 -- common/autotest_common.sh@1187 -- # local i=0 00:17:56.007 16:37:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.007 16:37:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:56.007 16:37:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:57.909 16:37:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:57.909 16:37:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:17:57.909 16:37:35 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:17:57.909 16:37:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:17:57.909 16:37:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.909 16:37:35 -- common/autotest_common.sh@1197 -- # return 0 00:17:57.909 16:37:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:57.909 16:37:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:17:58.168 16:37:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:58.168 16:37:35 -- common/autotest_common.sh@1187 -- # local i=0 00:17:58.168 16:37:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.168 16:37:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:17:58.168 16:37:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:00.070 16:37:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:00.070 16:37:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:00.070 16:37:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:00.070 16:37:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:00.070 16:37:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:00.070 16:37:37 -- common/autotest_common.sh@1197 -- # return 0 00:18:00.071 16:37:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:00.071 16:37:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:00.329 16:37:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:00.329 16:37:37 -- common/autotest_common.sh@1187 -- # local i=0 00:18:00.329 16:37:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:00.329 16:37:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:00.329 16:37:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:02.281 16:37:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:02.281 16:37:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:02.281 16:37:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:02.281 16:37:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:02.281 16:37:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.281 16:37:39 -- common/autotest_common.sh@1197 -- # return 0 00:18:02.281 16:37:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:02.281 16:37:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:02.540 16:37:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:02.540 16:37:39 -- common/autotest_common.sh@1187 -- # local i=0 00:18:02.540 16:37:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.540 16:37:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:02.540 16:37:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:04.442 16:37:41 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:04.442 16:37:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:04.442 16:37:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:04.701 16:37:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:04.701 16:37:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.701 16:37:41 -- common/autotest_common.sh@1197 -- # return 0 00:18:04.701 16:37:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:04.701 16:37:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:04.701 16:37:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:04.701 16:37:42 -- common/autotest_common.sh@1187 -- # local i=0 00:18:04.701 16:37:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.701 16:37:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:04.701 16:37:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:07.241 16:37:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:07.241 16:37:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:07.241 16:37:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:07.241 16:37:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:07.242 16:37:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.242 16:37:44 -- common/autotest_common.sh@1197 -- # return 0 00:18:07.242 16:37:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:07.242 16:37:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:07.242 16:37:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:07.242 16:37:44 -- common/autotest_common.sh@1187 -- # local i=0 00:18:07.242 16:37:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.242 16:37:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:07.242 16:37:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:09.146 16:37:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:09.146 16:37:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:09.146 16:37:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:09.146 16:37:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:09.146 16:37:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.146 16:37:46 -- common/autotest_common.sh@1197 -- # return 0 00:18:09.146 16:37:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.146 16:37:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:09.146 16:37:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:09.146 16:37:46 -- common/autotest_common.sh@1187 -- # local i=0 00:18:09.146 16:37:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.146 16:37:46 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:09.146 16:37:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:11.676 16:37:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:11.676 16:37:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:11.676 16:37:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:11.676 16:37:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:11.676 16:37:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.676 16:37:48 -- common/autotest_common.sh@1197 -- # return 0 00:18:11.676 16:37:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:11.676 16:37:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:11.676 16:37:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:11.676 16:37:48 -- common/autotest_common.sh@1187 -- # local i=0 00:18:11.676 16:37:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.676 16:37:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:11.676 16:37:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:13.579 16:37:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:13.579 16:37:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:13.579 16:37:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:13.579 16:37:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:13.579 16:37:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.579 16:37:50 -- common/autotest_common.sh@1197 -- # return 0 00:18:13.579 16:37:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:13.579 16:37:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:13.579 16:37:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:13.579 16:37:50 -- common/autotest_common.sh@1187 -- # local i=0 00:18:13.579 16:37:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:13.579 16:37:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:13.579 16:37:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:15.480 16:37:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:15.480 16:37:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:15.480 16:37:52 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:15.738 16:37:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:15.738 16:37:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.738 16:37:52 -- common/autotest_common.sh@1197 -- # return 0 00:18:15.738 16:37:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:15.738 16:37:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:15.739 16:37:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:15.739 16:37:53 -- common/autotest_common.sh@1187 -- # local i=0 
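Each host-side connect above follows the same two-step rhythm: nvme connect against the next cnode, then a waitforserial poll that sleeps two seconds and greps lsblk output until a block device with the expected serial appears (autotest_common.sh lines 1187-1197 in the trace). A sketch of that poll, with the hostnqn/hostid UUID copied from the log:

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while ((i++ <= 15)); do                                 # give up after ~30 s
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((nvme_devices == nvme_device_counter)) && return 0
    done
    return 1
}

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 \
    --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007
waitforserial SPDK1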
00:18:15.739 16:37:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.739 16:37:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:15.739 16:37:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:18.270 16:37:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:18.270 16:37:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:18.270 16:37:55 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:18.270 16:37:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:18.270 16:37:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.270 16:37:55 -- common/autotest_common.sh@1197 -- # return 0 00:18:18.270 16:37:55 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:18.270 [global] 00:18:18.270 thread=1 00:18:18.270 invalidate=1 00:18:18.270 rw=read 00:18:18.270 time_based=1 00:18:18.270 runtime=10 00:18:18.270 ioengine=libaio 00:18:18.270 direct=1 00:18:18.270 bs=262144 00:18:18.270 iodepth=64 00:18:18.270 norandommap=1 00:18:18.270 numjobs=1 00:18:18.270 00:18:18.270 [job0] 00:18:18.270 filename=/dev/nvme0n1 00:18:18.270 [job1] 00:18:18.270 filename=/dev/nvme10n1 00:18:18.270 [job2] 00:18:18.270 filename=/dev/nvme1n1 00:18:18.270 [job3] 00:18:18.270 filename=/dev/nvme2n1 00:18:18.270 [job4] 00:18:18.270 filename=/dev/nvme3n1 00:18:18.270 [job5] 00:18:18.270 filename=/dev/nvme4n1 00:18:18.270 [job6] 00:18:18.270 filename=/dev/nvme5n1 00:18:18.270 [job7] 00:18:18.270 filename=/dev/nvme6n1 00:18:18.270 [job8] 00:18:18.270 filename=/dev/nvme7n1 00:18:18.270 [job9] 00:18:18.270 filename=/dev/nvme8n1 00:18:18.270 [job10] 00:18:18.270 filename=/dev/nvme9n1 00:18:18.270 Could not set queue depth (nvme0n1) 00:18:18.270 Could not set queue depth (nvme10n1) 00:18:18.270 Could not set queue depth (nvme1n1) 00:18:18.270 Could not set queue depth (nvme2n1) 00:18:18.270 Could not set queue depth (nvme3n1) 00:18:18.270 Could not set queue depth (nvme4n1) 00:18:18.270 Could not set queue depth (nvme5n1) 00:18:18.270 Could not set queue depth (nvme6n1) 00:18:18.270 Could not set queue depth (nvme7n1) 00:18:18.270 Could not set queue depth (nvme8n1) 00:18:18.271 Could not set queue depth (nvme9n1) 00:18:18.271 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:18.271 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:18.271 fio-3.35 00:18:18.271 Starting 11 threads 00:18:30.480 00:18:30.480 job0: (groupid=0, jobs=1): err= 0: pid=91290: Sat Nov 16 16:38:05 2024 00:18:30.480 read: IOPS=404, BW=101MiB/s (106MB/s)(1025MiB/10127msec) 00:18:30.480 slat (usec): min=14, max=87334, avg=2368.98, stdev=8294.17 00:18:30.480 clat (msec): min=19, max=294, avg=155.52, stdev=51.31 00:18:30.480 lat (msec): min=19, max=295, avg=157.89, stdev=52.60 00:18:30.480 clat percentiles (msec): 00:18:30.480 | 1.00th=[ 43], 5.00th=[ 66], 10.00th=[ 74], 20.00th=[ 99], 00:18:30.480 | 30.00th=[ 134], 40.00th=[ 159], 50.00th=[ 174], 60.00th=[ 182], 00:18:30.480 | 70.00th=[ 188], 80.00th=[ 197], 90.00th=[ 211], 95.00th=[ 224], 00:18:30.480 | 99.00th=[ 243], 99.50th=[ 262], 99.90th=[ 296], 99.95th=[ 296], 00:18:30.480 | 99.99th=[ 296] 00:18:30.480 bw ( KiB/s): min=66560, max=232448, per=5.90%, avg=103226.50, stdev=39269.58, samples=20 00:18:30.480 iops : min= 260, max= 908, avg=403.15, stdev=153.43, samples=20 00:18:30.480 lat (msec) : 20=0.05%, 50=1.00%, 100=19.33%, 250=78.92%, 500=0.71% 00:18:30.480 cpu : usr=0.19%, sys=1.37%, ctx=818, majf=0, minf=4097 00:18:30.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:30.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.480 issued rwts: total=4098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.480 job1: (groupid=0, jobs=1): err= 0: pid=91291: Sat Nov 16 16:38:05 2024 00:18:30.480 read: IOPS=366, BW=91.5MiB/s (95.9MB/s)(926MiB/10120msec) 00:18:30.480 slat (usec): min=16, max=132896, avg=2622.22, stdev=9782.81 00:18:30.480 clat (msec): min=47, max=329, avg=171.88, stdev=44.12 00:18:30.480 lat (msec): min=47, max=342, avg=174.50, stdev=45.67 00:18:30.480 clat percentiles (msec): 00:18:30.480 | 1.00th=[ 55], 5.00th=[ 73], 10.00th=[ 84], 20.00th=[ 144], 00:18:30.480 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:18:30.480 | 70.00th=[ 194], 80.00th=[ 203], 90.00th=[ 215], 95.00th=[ 226], 00:18:30.480 | 99.00th=[ 249], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 330], 00:18:30.480 | 99.99th=[ 330] 00:18:30.480 bw ( KiB/s): min=69632, max=173568, per=5.33%, avg=93182.20, stdev=23558.74, samples=20 00:18:30.480 iops : min= 272, max= 678, avg=363.80, stdev=92.11, samples=20 00:18:30.480 lat (msec) : 50=0.27%, 100=11.34%, 250=87.45%, 500=0.94% 00:18:30.480 cpu : usr=0.16%, sys=1.11%, ctx=789, majf=0, minf=4097 00:18:30.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:30.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.480 issued rwts: total=3704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.480 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.480 job2: (groupid=0, jobs=1): err= 0: pid=91292: Sat Nov 16 16:38:05 2024 00:18:30.480 read: IOPS=882, BW=221MiB/s (231MB/s)(2219MiB/10054msec) 00:18:30.480 slat (usec): min=20, max=136327, avg=1099.85, stdev=4851.67 00:18:30.480 clat (msec): min=13, max=299, avg=71.29, stdev=36.99 00:18:30.480 lat (msec): min=13, max=314, avg=72.39, stdev=37.73 00:18:30.480 clat percentiles (msec): 00:18:30.480 | 1.00th=[ 22], 
5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 40], 00:18:30.480 | 30.00th=[ 50], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 73], 00:18:30.480 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 133], 95.00th=[ 157], 00:18:30.480 | 99.00th=[ 190], 99.50th=[ 197], 99.90th=[ 209], 99.95th=[ 209], 00:18:30.480 | 99.99th=[ 300] 00:18:30.480 bw ( KiB/s): min=87040, max=449024, per=12.89%, avg=225424.95, stdev=106550.02, samples=20 00:18:30.481 iops : min= 340, max= 1754, avg=880.45, stdev=416.23, samples=20 00:18:30.481 lat (msec) : 20=0.63%, 50=30.37%, 100=56.89%, 250=12.07%, 500=0.03% 00:18:30.481 cpu : usr=0.30%, sys=2.73%, ctx=1529, majf=0, minf=4098 00:18:30.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:30.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.481 issued rwts: total=8876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.481 job3: (groupid=0, jobs=1): err= 0: pid=91293: Sat Nov 16 16:38:05 2024 00:18:30.481 read: IOPS=363, BW=90.9MiB/s (95.3MB/s)(921MiB/10128msec) 00:18:30.481 slat (usec): min=15, max=95906, avg=2669.66, stdev=9192.46 00:18:30.481 clat (msec): min=14, max=254, avg=173.13, stdev=40.17 00:18:30.481 lat (msec): min=15, max=307, avg=175.80, stdev=41.64 00:18:30.481 clat percentiles (msec): 00:18:30.481 | 1.00th=[ 40], 5.00th=[ 71], 10.00th=[ 130], 20.00th=[ 157], 00:18:30.481 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:18:30.481 | 70.00th=[ 192], 80.00th=[ 199], 90.00th=[ 218], 95.00th=[ 224], 00:18:30.481 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 255], 99.95th=[ 255], 00:18:30.481 | 99.99th=[ 255] 00:18:30.481 bw ( KiB/s): min=64512, max=153804, per=5.29%, avg=92535.85, stdev=18674.11, samples=20 00:18:30.481 iops : min= 252, max= 600, avg=361.35, stdev=72.83, samples=20 00:18:30.481 lat (msec) : 20=0.24%, 50=1.90%, 100=6.14%, 250=91.25%, 500=0.46% 00:18:30.481 cpu : usr=0.09%, sys=1.27%, ctx=630, majf=0, minf=4097 00:18:30.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:30.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.481 issued rwts: total=3682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.481 job4: (groupid=0, jobs=1): err= 0: pid=91294: Sat Nov 16 16:38:05 2024 00:18:30.481 read: IOPS=587, BW=147MiB/s (154MB/s)(1479MiB/10065msec) 00:18:30.481 slat (usec): min=18, max=56336, avg=1667.05, stdev=5461.26 00:18:30.481 clat (msec): min=18, max=180, avg=107.05, stdev=20.06 00:18:30.481 lat (msec): min=19, max=188, avg=108.71, stdev=20.83 00:18:30.481 clat percentiles (msec): 00:18:30.481 | 1.00th=[ 53], 5.00th=[ 70], 10.00th=[ 82], 20.00th=[ 96], 00:18:30.481 | 30.00th=[ 101], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 111], 00:18:30.481 | 70.00th=[ 115], 80.00th=[ 121], 90.00th=[ 131], 95.00th=[ 142], 00:18:30.481 | 99.00th=[ 157], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 169], 00:18:30.481 | 99.99th=[ 182] 00:18:30.481 bw ( KiB/s): min=114459, max=208384, per=8.56%, avg=149702.95, stdev=20826.71, samples=20 00:18:30.481 iops : min= 447, max= 814, avg=584.55, stdev=81.38, samples=20 00:18:30.481 lat (msec) : 20=0.03%, 50=0.68%, 100=27.95%, 250=71.34% 00:18:30.481 cpu : usr=0.32%, sys=1.89%, ctx=1226, majf=0, 
minf=4097 00:18:30.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:30.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.481 issued rwts: total=5914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.481 job5: (groupid=0, jobs=1): err= 0: pid=91295: Sat Nov 16 16:38:05 2024 00:18:30.481 read: IOPS=665, BW=166MiB/s (174MB/s)(1672MiB/10051msec) 00:18:30.481 slat (usec): min=16, max=125063, avg=1443.67, stdev=5801.55 00:18:30.481 clat (msec): min=20, max=346, avg=94.56, stdev=45.20 00:18:30.481 lat (msec): min=20, max=356, avg=96.00, stdev=46.12 00:18:30.481 clat percentiles (msec): 00:18:30.481 | 1.00th=[ 40], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 66], 00:18:30.481 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 83], 00:18:30.481 | 70.00th=[ 91], 80.00th=[ 123], 90.00th=[ 161], 95.00th=[ 211], 00:18:30.481 | 99.00th=[ 234], 99.50th=[ 247], 99.90th=[ 279], 99.95th=[ 284], 00:18:30.481 | 99.99th=[ 347] 00:18:30.481 bw ( KiB/s): min=71168, max=230552, per=9.69%, avg=169469.90, stdev=57899.36, samples=20 00:18:30.481 iops : min= 278, max= 900, avg=661.85, stdev=226.09, samples=20 00:18:30.481 lat (msec) : 50=2.36%, 100=72.70%, 250=24.51%, 500=0.43% 00:18:30.481 cpu : usr=0.29%, sys=1.91%, ctx=1470, majf=0, minf=4097 00:18:30.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:30.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.481 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.481 job6: (groupid=0, jobs=1): err= 0: pid=91296: Sat Nov 16 16:38:05 2024 00:18:30.481 read: IOPS=592, BW=148MiB/s (155MB/s)(1492MiB/10066msec) 00:18:30.481 slat (usec): min=15, max=57502, avg=1593.86, stdev=5289.53 00:18:30.481 clat (msec): min=23, max=178, avg=106.19, stdev=23.96 00:18:30.481 lat (msec): min=23, max=186, avg=107.79, stdev=24.73 00:18:30.481 clat percentiles (msec): 00:18:30.481 | 1.00th=[ 40], 5.00th=[ 59], 10.00th=[ 70], 20.00th=[ 94], 00:18:30.481 | 30.00th=[ 101], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 113], 00:18:30.481 | 70.00th=[ 116], 80.00th=[ 122], 90.00th=[ 133], 95.00th=[ 146], 00:18:30.481 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 171], 99.95th=[ 174], 00:18:30.481 | 99.99th=[ 180] 00:18:30.481 bw ( KiB/s): min=107008, max=282112, per=8.64%, avg=151087.60, stdev=34746.51, samples=20 00:18:30.481 iops : min= 418, max= 1102, avg=590.00, stdev=135.77, samples=20 00:18:30.481 lat (msec) : 50=2.68%, 100=26.95%, 250=70.37% 00:18:30.481 cpu : usr=0.25%, sys=2.01%, ctx=1052, majf=0, minf=4097 00:18:30.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:30.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.481 issued rwts: total=5967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.481 job7: (groupid=0, jobs=1): err= 0: pid=91297: Sat Nov 16 16:38:05 2024 00:18:30.481 read: IOPS=406, BW=102MiB/s (106MB/s)(1029MiB/10127msec) 00:18:30.481 slat (usec): min=11, max=106781, avg=2332.92, stdev=8084.12 00:18:30.481 
clat (msec): min=24, max=309, avg=154.87, stdev=59.76 00:18:30.481 lat (msec): min=24, max=309, avg=157.21, stdev=61.06 00:18:30.481 clat percentiles (msec): 00:18:30.481 | 1.00th=[ 33], 5.00th=[ 55], 10.00th=[ 63], 20.00th=[ 80], 00:18:30.481 | 30.00th=[ 110], 40.00th=[ 171], 50.00th=[ 180], 60.00th=[ 186], 00:18:30.481 | 70.00th=[ 192], 80.00th=[ 199], 90.00th=[ 213], 95.00th=[ 228], 00:18:30.481 | 99.00th=[ 249], 99.50th=[ 292], 99.90th=[ 309], 99.95th=[ 309], 00:18:30.481 | 99.99th=[ 309] 00:18:30.481 bw ( KiB/s): min=71168, max=270848, per=5.93%, avg=103634.95, stdev=52304.12, samples=20 00:18:30.481 iops : min= 278, max= 1058, avg=404.70, stdev=204.35, samples=20 00:18:30.481 lat (msec) : 50=4.21%, 100=24.65%, 250=70.18%, 500=0.97% 00:18:30.481 cpu : usr=0.16%, sys=1.31%, ctx=822, majf=0, minf=4097 00:18:30.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:30.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.481 issued rwts: total=4114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.481 job8: (groupid=0, jobs=1): err= 0: pid=91298: Sat Nov 16 16:38:05 2024 00:18:30.481 read: IOPS=536, BW=134MiB/s (141MB/s)(1351MiB/10061msec) 00:18:30.481 slat (usec): min=20, max=73928, avg=1803.57, stdev=6009.53 00:18:30.481 clat (msec): min=38, max=255, avg=117.20, stdev=28.80 00:18:30.481 lat (msec): min=38, max=255, avg=119.00, stdev=29.57 00:18:30.481 clat percentiles (msec): 00:18:30.481 | 1.00th=[ 63], 5.00th=[ 73], 10.00th=[ 87], 20.00th=[ 101], 00:18:30.481 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 113], 60.00th=[ 117], 00:18:30.481 | 70.00th=[ 122], 80.00th=[ 133], 90.00th=[ 161], 95.00th=[ 178], 00:18:30.481 | 99.00th=[ 203], 99.50th=[ 209], 99.90th=[ 232], 99.95th=[ 241], 00:18:30.481 | 99.99th=[ 255] 00:18:30.481 bw ( KiB/s): min=72704, max=200192, per=7.82%, avg=136678.05, stdev=29668.94, samples=20 00:18:30.481 iops : min= 284, max= 782, avg=533.75, stdev=115.84, samples=20 00:18:30.481 lat (msec) : 50=0.26%, 100=19.34%, 250=80.38%, 500=0.02% 00:18:30.481 cpu : usr=0.30%, sys=1.77%, ctx=1155, majf=0, minf=4097 00:18:30.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:30.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.481 issued rwts: total=5402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.481 job9: (groupid=0, jobs=1): err= 0: pid=91299: Sat Nov 16 16:38:05 2024 00:18:30.481 read: IOPS=653, BW=163MiB/s (171MB/s)(1652MiB/10117msec) 00:18:30.481 slat (usec): min=20, max=122167, avg=1482.56, stdev=6519.22 00:18:30.481 clat (msec): min=16, max=324, avg=96.35, stdev=76.65 00:18:30.481 lat (msec): min=16, max=332, avg=97.83, stdev=78.04 00:18:30.481 clat percentiles (msec): 00:18:30.481 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 33], 00:18:30.481 | 30.00th=[ 37], 40.00th=[ 40], 50.00th=[ 44], 60.00th=[ 74], 00:18:30.481 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 203], 95.00th=[ 218], 00:18:30.481 | 99.00th=[ 236], 99.50th=[ 249], 99.90th=[ 271], 99.95th=[ 296], 00:18:30.481 | 99.99th=[ 326] 00:18:30.481 bw ( KiB/s): min=66560, max=457728, per=9.58%, avg=167520.65, stdev=150648.61, samples=20 00:18:30.481 iops : min= 260, max= 1788, 
avg=654.15, stdev=588.52, samples=20 00:18:30.481 lat (msec) : 20=0.41%, 50=56.89%, 100=5.22%, 250=37.06%, 500=0.42% 00:18:30.481 cpu : usr=0.22%, sys=2.01%, ctx=1273, majf=0, minf=4097 00:18:30.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:30.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.481 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.481 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.481 job10: (groupid=0, jobs=1): err= 0: pid=91300: Sat Nov 16 16:38:05 2024 00:18:30.481 read: IOPS=1407, BW=352MiB/s (369MB/s)(3527MiB/10023msec) 00:18:30.481 slat (usec): min=18, max=99142, avg=694.62, stdev=3005.79 00:18:30.482 clat (msec): min=15, max=204, avg=44.67, stdev=22.11 00:18:30.482 lat (msec): min=15, max=280, avg=45.36, stdev=22.43 00:18:30.482 clat percentiles (msec): 00:18:30.482 | 1.00th=[ 20], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 32], 00:18:30.482 | 30.00th=[ 34], 40.00th=[ 37], 50.00th=[ 39], 60.00th=[ 42], 00:18:30.482 | 70.00th=[ 45], 80.00th=[ 51], 90.00th=[ 73], 95.00th=[ 87], 00:18:30.482 | 99.00th=[ 122], 99.50th=[ 176], 99.90th=[ 197], 99.95th=[ 205], 00:18:30.482 | 99.99th=[ 205] 00:18:30.482 bw ( KiB/s): min=156985, max=457325, per=20.55%, avg=359232.80, stdev=118402.53, samples=20 00:18:30.482 iops : min= 613, max= 1786, avg=1403.15, stdev=462.45, samples=20 00:18:30.482 lat (msec) : 20=1.13%, 50=78.85%, 100=17.10%, 250=2.92% 00:18:30.482 cpu : usr=0.45%, sys=3.93%, ctx=2725, majf=0, minf=4097 00:18:30.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:30.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:30.482 issued rwts: total=14109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.482 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:30.482 00:18:30.482 Run status group 0 (all jobs): 00:18:30.482 READ: bw=1707MiB/s (1790MB/s), 90.9MiB/s-352MiB/s (95.3MB/s-369MB/s), io=16.9GiB (18.1GB), run=10023-10128msec 00:18:30.482 00:18:30.482 Disk stats (read/write): 00:18:30.482 nvme0n1: ios=8090/0, merge=0/0, ticks=1237214/0, in_queue=1237214, util=97.61% 00:18:30.482 nvme10n1: ios=7299/0, merge=0/0, ticks=1238684/0, in_queue=1238684, util=97.79% 00:18:30.482 nvme1n1: ios=17667/0, merge=0/0, ticks=1236647/0, in_queue=1236647, util=97.81% 00:18:30.482 nvme2n1: ios=7239/0, merge=0/0, ticks=1240001/0, in_queue=1240001, util=98.21% 00:18:30.482 nvme3n1: ios=11701/0, merge=0/0, ticks=1239012/0, in_queue=1239012, util=97.99% 00:18:30.482 nvme4n1: ios=13286/0, merge=0/0, ticks=1239559/0, in_queue=1239559, util=97.74% 00:18:30.482 nvme5n1: ios=11823/0, merge=0/0, ticks=1241742/0, in_queue=1241742, util=98.40% 00:18:30.482 nvme6n1: ios=8117/0, merge=0/0, ticks=1236695/0, in_queue=1236695, util=98.42% 00:18:30.482 nvme7n1: ios=10717/0, merge=0/0, ticks=1240536/0, in_queue=1240536, util=98.36% 00:18:30.482 nvme8n1: ios=13088/0, merge=0/0, ticks=1233197/0, in_queue=1233197, util=98.66% 00:18:30.482 nvme9n1: ios=28145/0, merge=0/0, ticks=1224279/0, in_queue=1224279, util=98.56% 00:18:30.482 16:38:05 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:30.482 [global] 00:18:30.482 thread=1 00:18:30.482 invalidate=1 00:18:30.482 rw=randwrite 00:18:30.482 
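Both fio passes come from the same fio-wrapper script: the read pass above used -t read, and this second pass only swaps in -t randwrite. The wrapper expands its arguments into the job file shown in the log, one [jobN] section per connected namespace. For a single device, a roughly equivalent standalone invocation would be the following (the device path and job name are illustrative; the flags mirror the generated job file):

# -i 262144 maps to bs, -d 64 to iodepth, -r 10 to runtime in the wrapper's flags
fio --name=job0 --filename=/dev/nvme0n1 --rw=randwrite --bs=262144 \
    --iodepth=64 --ioengine=libaio --direct=1 --thread=1 --time_based=1 \
    --runtime=10 --norandommap=1 --invalidate=1 --numjobs=1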
time_based=1 00:18:30.482 runtime=10 00:18:30.482 ioengine=libaio 00:18:30.482 direct=1 00:18:30.482 bs=262144 00:18:30.482 iodepth=64 00:18:30.482 norandommap=1 00:18:30.482 numjobs=1 00:18:30.482 00:18:30.482 [job0] 00:18:30.482 filename=/dev/nvme0n1 00:18:30.482 [job1] 00:18:30.482 filename=/dev/nvme10n1 00:18:30.482 [job2] 00:18:30.482 filename=/dev/nvme1n1 00:18:30.482 [job3] 00:18:30.482 filename=/dev/nvme2n1 00:18:30.482 [job4] 00:18:30.482 filename=/dev/nvme3n1 00:18:30.482 [job5] 00:18:30.482 filename=/dev/nvme4n1 00:18:30.482 [job6] 00:18:30.482 filename=/dev/nvme5n1 00:18:30.482 [job7] 00:18:30.482 filename=/dev/nvme6n1 00:18:30.482 [job8] 00:18:30.482 filename=/dev/nvme7n1 00:18:30.482 [job9] 00:18:30.482 filename=/dev/nvme8n1 00:18:30.482 [job10] 00:18:30.482 filename=/dev/nvme9n1 00:18:30.482 Could not set queue depth (nvme0n1) 00:18:30.482 Could not set queue depth (nvme10n1) 00:18:30.482 Could not set queue depth (nvme1n1) 00:18:30.482 Could not set queue depth (nvme2n1) 00:18:30.482 Could not set queue depth (nvme3n1) 00:18:30.482 Could not set queue depth (nvme4n1) 00:18:30.482 Could not set queue depth (nvme5n1) 00:18:30.482 Could not set queue depth (nvme6n1) 00:18:30.482 Could not set queue depth (nvme7n1) 00:18:30.482 Could not set queue depth (nvme8n1) 00:18:30.482 Could not set queue depth (nvme9n1) 00:18:30.482 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:30.482 fio-3.35 00:18:30.482 Starting 11 threads 00:18:40.465 00:18:40.465 job0: (groupid=0, jobs=1): err= 0: pid=91503: Sat Nov 16 16:38:16 2024 00:18:40.465 write: IOPS=452, BW=113MiB/s (119MB/s)(1146MiB/10128msec); 0 zone resets 00:18:40.465 slat (usec): min=26, max=13609, avg=2177.71, stdev=3705.69 00:18:40.465 clat (msec): min=20, max=264, avg=139.17, stdev=11.96 00:18:40.465 lat (msec): min=20, max=264, avg=141.34, stdev=11.54 00:18:40.465 clat percentiles (msec): 00:18:40.465 | 1.00th=[ 113], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 134], 00:18:40.465 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 00:18:40.465 | 70.00th=[ 142], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 144], 00:18:40.465 | 99.00th=[ 167], 99.50th=[ 211], 99.90th=[ 257], 99.95th=[ 257], 
00:18:40.465 | 99.99th=[ 266] 00:18:40.465 bw ( KiB/s): min=110592, max=118784, per=8.61%, avg=115686.40, stdev=1830.85, samples=20 00:18:40.465 iops : min= 432, max= 464, avg=451.90, stdev= 7.15, samples=20 00:18:40.465 lat (msec) : 50=0.35%, 100=0.52%, 250=99.02%, 500=0.11% 00:18:40.465 cpu : usr=1.38%, sys=1.29%, ctx=4988, majf=0, minf=1 00:18:40.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:40.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.465 issued rwts: total=0,4582,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.465 job1: (groupid=0, jobs=1): err= 0: pid=91504: Sat Nov 16 16:38:16 2024 00:18:40.465 write: IOPS=625, BW=156MiB/s (164MB/s)(1578MiB/10088msec); 0 zone resets 00:18:40.465 slat (usec): min=20, max=14043, avg=1558.90, stdev=2675.54 00:18:40.465 clat (msec): min=6, max=190, avg=100.73, stdev= 9.39 00:18:40.465 lat (msec): min=6, max=190, avg=102.29, stdev= 9.17 00:18:40.465 clat percentiles (msec): 00:18:40.465 | 1.00th=[ 69], 5.00th=[ 94], 10.00th=[ 95], 20.00th=[ 97], 00:18:40.465 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 103], 00:18:40.465 | 70.00th=[ 104], 80.00th=[ 104], 90.00th=[ 105], 95.00th=[ 106], 00:18:40.465 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 178], 99.95th=[ 184], 00:18:40.465 | 99.99th=[ 190] 00:18:40.465 bw ( KiB/s): min=155648, max=162304, per=11.90%, avg=159907.55, stdev=2036.75, samples=20 00:18:40.465 iops : min= 608, max= 634, avg=624.60, stdev= 8.03, samples=20 00:18:40.465 lat (msec) : 10=0.14%, 20=0.08%, 50=0.51%, 100=32.93%, 250=66.34% 00:18:40.465 cpu : usr=1.10%, sys=1.75%, ctx=7717, majf=0, minf=1 00:18:40.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:40.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.465 issued rwts: total=0,6310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.465 job2: (groupid=0, jobs=1): err= 0: pid=91516: Sat Nov 16 16:38:16 2024 00:18:40.465 write: IOPS=613, BW=153MiB/s (161MB/s)(1546MiB/10081msec); 0 zone resets 00:18:40.465 slat (usec): min=18, max=37119, avg=1612.02, stdev=2766.98 00:18:40.465 clat (msec): min=39, max=181, avg=102.72, stdev= 7.41 00:18:40.465 lat (msec): min=40, max=181, avg=104.33, stdev= 7.01 00:18:40.465 clat percentiles (msec): 00:18:40.465 | 1.00th=[ 94], 5.00th=[ 96], 10.00th=[ 97], 20.00th=[ 100], 00:18:40.465 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 104], 00:18:40.465 | 70.00th=[ 105], 80.00th=[ 106], 90.00th=[ 107], 95.00th=[ 108], 00:18:40.465 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 174], 99.95th=[ 176], 00:18:40.465 | 99.99th=[ 182] 00:18:40.465 bw ( KiB/s): min=131072, max=161792, per=11.66%, avg=156615.20, stdev=6305.59, samples=20 00:18:40.465 iops : min= 512, max= 632, avg=611.70, stdev=24.64, samples=20 00:18:40.465 lat (msec) : 50=0.13%, 100=25.66%, 250=74.22% 00:18:40.465 cpu : usr=1.29%, sys=1.71%, ctx=7520, majf=0, minf=1 00:18:40.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:40.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.466 
issued rwts: total=0,6182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.466 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.466 job3: (groupid=0, jobs=1): err= 0: pid=91517: Sat Nov 16 16:38:16 2024 00:18:40.466 write: IOPS=624, BW=156MiB/s (164MB/s)(1574MiB/10085msec); 0 zone resets 00:18:40.466 slat (usec): min=18, max=10863, avg=1583.47, stdev=2691.70 00:18:40.466 clat (msec): min=11, max=187, avg=100.89, stdev= 8.67 00:18:40.466 lat (msec): min=11, max=187, avg=102.47, stdev= 8.39 00:18:40.466 clat percentiles (msec): 00:18:40.466 | 1.00th=[ 89], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 97], 00:18:40.466 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 102], 60.00th=[ 103], 00:18:40.466 | 70.00th=[ 104], 80.00th=[ 104], 90.00th=[ 105], 95.00th=[ 107], 00:18:40.466 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 174], 99.95th=[ 182], 00:18:40.466 | 99.99th=[ 188] 00:18:40.466 bw ( KiB/s): min=147456, max=164352, per=11.87%, avg=159513.60, stdev=3595.43, samples=20 00:18:40.466 iops : min= 576, max= 642, avg=623.10, stdev=14.04, samples=20 00:18:40.466 lat (msec) : 20=0.24%, 50=0.25%, 100=33.57%, 250=65.94% 00:18:40.466 cpu : usr=1.20%, sys=1.65%, ctx=7426, majf=0, minf=1 00:18:40.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:40.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.466 issued rwts: total=0,6294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.466 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.466 job4: (groupid=0, jobs=1): err= 0: pid=91518: Sat Nov 16 16:38:16 2024 00:18:40.466 write: IOPS=452, BW=113MiB/s (119MB/s)(1146MiB/10123msec); 0 zone resets 00:18:40.466 slat (usec): min=18, max=13480, avg=2176.54, stdev=3707.84 00:18:40.466 clat (msec): min=16, max=256, avg=139.05, stdev=11.66 00:18:40.466 lat (msec): min=16, max=256, avg=141.22, stdev=11.24 00:18:40.466 clat percentiles (msec): 00:18:40.466 | 1.00th=[ 112], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 134], 00:18:40.466 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 00:18:40.466 | 70.00th=[ 142], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 144], 00:18:40.466 | 99.00th=[ 159], 99.50th=[ 205], 99.90th=[ 249], 99.95th=[ 249], 00:18:40.466 | 99.99th=[ 257] 00:18:40.466 bw ( KiB/s): min=110813, max=117248, per=8.62%, avg=115774.25, stdev=1546.11, samples=20 00:18:40.466 iops : min= 432, max= 458, avg=452.20, stdev= 6.19, samples=20 00:18:40.466 lat (msec) : 20=0.09%, 50=0.26%, 100=0.52%, 250=99.08%, 500=0.04% 00:18:40.466 cpu : usr=1.39%, sys=1.37%, ctx=4699, majf=0, minf=1 00:18:40.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:40.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.466 issued rwts: total=0,4585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.466 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.466 job5: (groupid=0, jobs=1): err= 0: pid=91519: Sat Nov 16 16:38:16 2024 00:18:40.466 write: IOPS=356, BW=89.1MiB/s (93.5MB/s)(906MiB/10167msec); 0 zone resets 00:18:40.466 slat (usec): min=27, max=54206, avg=2752.77, stdev=4820.69 00:18:40.466 clat (msec): min=6, max=344, avg=176.62, stdev=19.26 00:18:40.466 lat (msec): min=6, max=344, avg=179.38, stdev=18.87 00:18:40.466 clat percentiles (msec): 00:18:40.466 | 1.00th=[ 118], 5.00th=[ 165], 10.00th=[ 
167], 20.00th=[ 171], 00:18:40.466 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 178], 60.00th=[ 180], 00:18:40.466 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 190], 00:18:40.466 | 99.00th=[ 247], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 347], 00:18:40.466 | 99.99th=[ 347] 00:18:40.466 bw ( KiB/s): min=86016, max=94208, per=6.79%, avg=91178.35, stdev=1984.50, samples=20 00:18:40.466 iops : min= 336, max= 368, avg=356.15, stdev= 7.77, samples=20 00:18:40.466 lat (msec) : 10=0.11%, 20=0.11%, 50=0.33%, 100=0.11%, 250=98.40% 00:18:40.466 lat (msec) : 500=0.94% 00:18:40.466 cpu : usr=0.95%, sys=1.22%, ctx=4635, majf=0, minf=1 00:18:40.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:40.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.466 issued rwts: total=0,3625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.466 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.466 job6: (groupid=0, jobs=1): err= 0: pid=91520: Sat Nov 16 16:38:16 2024 00:18:40.466 write: IOPS=364, BW=91.1MiB/s (95.5MB/s)(927MiB/10168msec); 0 zone resets 00:18:40.466 slat (usec): min=18, max=19875, avg=2662.79, stdev=4639.60 00:18:40.466 clat (msec): min=12, max=337, avg=172.85, stdev=22.69 00:18:40.466 lat (msec): min=12, max=337, avg=175.52, stdev=22.61 00:18:40.466 clat percentiles (msec): 00:18:40.466 | 1.00th=[ 67], 5.00th=[ 144], 10.00th=[ 165], 20.00th=[ 169], 00:18:40.466 | 30.00th=[ 171], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 178], 00:18:40.466 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 184], 95.00th=[ 186], 00:18:40.466 | 99.00th=[ 236], 99.50th=[ 284], 99.90th=[ 326], 99.95th=[ 338], 00:18:40.466 | 99.99th=[ 338] 00:18:40.466 bw ( KiB/s): min=88064, max=117760, per=6.94%, avg=93251.75, stdev=6166.76, samples=20 00:18:40.466 iops : min= 344, max= 460, avg=364.25, stdev=24.10, samples=20 00:18:40.466 lat (msec) : 20=0.19%, 50=0.54%, 100=0.89%, 250=97.57%, 500=0.81% 00:18:40.466 cpu : usr=0.97%, sys=1.04%, ctx=4946, majf=0, minf=1 00:18:40.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:40.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.466 issued rwts: total=0,3706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.466 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.466 job7: (groupid=0, jobs=1): err= 0: pid=91521: Sat Nov 16 16:38:16 2024 00:18:40.466 write: IOPS=614, BW=154MiB/s (161MB/s)(1550MiB/10087msec); 0 zone resets 00:18:40.466 slat (usec): min=17, max=14804, avg=1594.10, stdev=2701.16 00:18:40.466 clat (msec): min=11, max=194, avg=102.48, stdev= 8.36 00:18:40.466 lat (msec): min=11, max=194, avg=104.07, stdev= 8.02 00:18:40.466 clat percentiles (msec): 00:18:40.466 | 1.00th=[ 93], 5.00th=[ 96], 10.00th=[ 97], 20.00th=[ 100], 00:18:40.466 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 103], 60.00th=[ 104], 00:18:40.466 | 70.00th=[ 105], 80.00th=[ 106], 90.00th=[ 107], 95.00th=[ 108], 00:18:40.466 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 180], 99.95th=[ 188], 00:18:40.466 | 99.99th=[ 194] 00:18:40.466 bw ( KiB/s): min=142336, max=160256, per=11.70%, avg=157132.80, stdev=4014.00, samples=20 00:18:40.466 iops : min= 556, max= 626, avg=613.80, stdev=15.68, samples=20 00:18:40.466 lat (msec) : 20=0.06%, 50=0.26%, 100=24.51%, 250=75.17% 00:18:40.466 cpu : 
usr=1.92%, sys=1.91%, ctx=7812, majf=0, minf=1 00:18:40.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:40.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.466 issued rwts: total=0,6201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.466 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.466 job8: (groupid=0, jobs=1): err= 0: pid=91522: Sat Nov 16 16:38:16 2024 00:18:40.466 write: IOPS=356, BW=89.1MiB/s (93.5MB/s)(907MiB/10169msec); 0 zone resets 00:18:40.466 slat (usec): min=20, max=37576, avg=2753.40, stdev=4801.22 00:18:40.466 clat (msec): min=23, max=341, avg=176.63, stdev=21.19 00:18:40.466 lat (msec): min=23, max=341, avg=179.39, stdev=20.94 00:18:40.466 clat percentiles (msec): 00:18:40.466 | 1.00th=[ 69], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 171], 00:18:40.466 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 178], 60.00th=[ 180], 00:18:40.466 | 70.00th=[ 182], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 194], 00:18:40.466 | 99.00th=[ 241], 99.50th=[ 288], 99.90th=[ 330], 99.95th=[ 342], 00:18:40.466 | 99.99th=[ 342] 00:18:40.466 bw ( KiB/s): min=86016, max=98816, per=6.79%, avg=91221.70, stdev=2551.31, samples=20 00:18:40.466 iops : min= 336, max= 386, avg=356.30, stdev=10.00, samples=20 00:18:40.466 lat (msec) : 50=0.44%, 100=1.19%, 250=97.44%, 500=0.94% 00:18:40.466 cpu : usr=1.17%, sys=1.00%, ctx=4856, majf=0, minf=1 00:18:40.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:40.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.466 issued rwts: total=0,3626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.466 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.466 job9: (groupid=0, jobs=1): err= 0: pid=91523: Sat Nov 16 16:38:16 2024 00:18:40.466 write: IOPS=453, BW=113MiB/s (119MB/s)(1146MiB/10118msec); 0 zone resets 00:18:40.466 slat (usec): min=20, max=13513, avg=2176.24, stdev=3704.53 00:18:40.466 clat (msec): min=10, max=255, avg=139.00, stdev=11.67 00:18:40.466 lat (msec): min=10, max=255, avg=141.18, stdev=11.25 00:18:40.466 clat percentiles (msec): 00:18:40.466 | 1.00th=[ 109], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 134], 00:18:40.466 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 142], 00:18:40.466 | 70.00th=[ 142], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 144], 00:18:40.466 | 99.00th=[ 150], 99.50th=[ 203], 99.90th=[ 247], 99.95th=[ 247], 00:18:40.466 | 99.99th=[ 255] 00:18:40.466 bw ( KiB/s): min=112128, max=118784, per=8.62%, avg=115751.85, stdev=1803.02, samples=20 00:18:40.466 iops : min= 438, max= 464, avg=452.15, stdev= 7.05, samples=20 00:18:40.466 lat (msec) : 20=0.07%, 50=0.35%, 100=0.44%, 250=99.13%, 500=0.02% 00:18:40.466 cpu : usr=1.27%, sys=1.38%, ctx=6231, majf=0, minf=1 00:18:40.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:40.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.467 issued rwts: total=0,4585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.467 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.467 job10: (groupid=0, jobs=1): err= 0: pid=91524: Sat Nov 16 16:38:16 2024 00:18:40.467 write: IOPS=362, BW=90.5MiB/s 
(94.9MB/s)(921MiB/10171msec); 0 zone resets 00:18:40.467 slat (usec): min=23, max=21737, avg=2672.82, stdev=4676.27 00:18:40.467 clat (msec): min=14, max=340, avg=173.98, stdev=21.45 00:18:40.467 lat (msec): min=14, max=340, avg=176.66, stdev=21.32 00:18:40.467 clat percentiles (msec): 00:18:40.467 | 1.00th=[ 88], 5.00th=[ 146], 10.00th=[ 165], 20.00th=[ 169], 00:18:40.467 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:18:40.467 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 184], 95.00th=[ 186], 00:18:40.467 | 99.00th=[ 241], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 342], 00:18:40.467 | 99.99th=[ 342] 00:18:40.467 bw ( KiB/s): min=88064, max=113891, per=6.90%, avg=92674.30, stdev=5400.01, samples=20 00:18:40.467 iops : min= 344, max= 444, avg=361.95, stdev=20.92, samples=20 00:18:40.467 lat (msec) : 20=0.11%, 50=0.43%, 100=0.71%, 250=97.83%, 500=0.92% 00:18:40.467 cpu : usr=1.14%, sys=0.96%, ctx=3165, majf=0, minf=1 00:18:40.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:40.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:40.467 issued rwts: total=0,3683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.467 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:40.467 00:18:40.467 Run status group 0 (all jobs): 00:18:40.467 WRITE: bw=1312MiB/s (1376MB/s), 89.1MiB/s-156MiB/s (93.5MB/s-164MB/s), io=13.0GiB (14.0GB), run=10081-10171msec 00:18:40.467 00:18:40.467 Disk stats (read/write): 00:18:40.467 nvme0n1: ios=49/9031, merge=0/0, ticks=46/1211837, in_queue=1211883, util=97.83% 00:18:40.467 nvme10n1: ios=49/12493, merge=0/0, ticks=73/1216879, in_queue=1216952, util=98.10% 00:18:40.467 nvme1n1: ios=40/12201, merge=0/0, ticks=32/1212209, in_queue=1212241, util=97.91% 00:18:40.467 nvme2n1: ios=13/12455, merge=0/0, ticks=18/1214586, in_queue=1214604, util=98.02% 00:18:40.467 nvme3n1: ios=15/9030, merge=0/0, ticks=38/1212230, in_queue=1212268, util=98.06% 00:18:40.467 nvme4n1: ios=0/7118, merge=0/0, ticks=0/1209921, in_queue=1209921, util=98.19% 00:18:40.467 nvme5n1: ios=0/7266, merge=0/0, ticks=0/1210750, in_queue=1210750, util=98.22% 00:18:40.467 nvme6n1: ios=0/12264, merge=0/0, ticks=0/1214402, in_queue=1214402, util=98.41% 00:18:40.467 nvme7n1: ios=0/7111, merge=0/0, ticks=0/1209649, in_queue=1209649, util=98.55% 00:18:40.467 nvme8n1: ios=0/9028, merge=0/0, ticks=0/1212119, in_queue=1212119, util=98.72% 00:18:40.467 nvme9n1: ios=0/7224, merge=0/0, ticks=0/1211155, in_queue=1211155, util=98.79% 00:18:40.467 16:38:16 -- target/multiconnection.sh@36 -- # sync 00:18:40.467 16:38:16 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:40.467 16:38:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.467 16:38:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:40.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:40.467 16:38:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:40.467 16:38:16 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.467 16:38:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.467 16:38:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:18:40.467 16:38:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:18:40.467 16:38:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.467 16:38:16 -- common/autotest_common.sh@1220 
-- # return 0 00:18:40.467 16:38:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.467 16:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.467 16:38:16 -- common/autotest_common.sh@10 -- # set +x 00:18:40.467 16:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.467 16:38:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.467 16:38:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:40.467 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:40.467 16:38:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:40.467 16:38:16 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.467 16:38:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.467 16:38:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:18:40.467 16:38:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:18:40.467 16:38:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.467 16:38:16 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.467 16:38:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:40.467 16:38:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.467 16:38:16 -- common/autotest_common.sh@10 -- # set +x 00:18:40.467 16:38:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.467 16:38:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.467 16:38:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:40.467 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:40.467 16:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:40.467 16:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.467 16:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.467 16:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:18:40.467 16:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.467 16:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:18:40.467 16:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.467 16:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:40.467 16:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.467 16:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:40.467 16:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.467 16:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.467 16:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:40.467 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:40.467 16:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:40.467 16:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.467 16:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.467 16:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:18:40.467 16:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.467 16:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:18:40.467 16:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.467 16:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:40.467 16:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.467 16:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:40.467 16:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.467 16:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.467 16:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:40.467 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:40.467 16:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:40.467 16:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.467 16:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.467 16:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:18:40.467 16:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.467 16:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:18:40.467 16:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.467 16:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:40.467 16:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.467 16:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:40.467 16:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.467 16:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.467 16:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:40.467 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:40.467 16:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:40.467 16:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.467 16:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.467 16:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:18:40.467 16:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.467 16:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:18:40.468 16:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.468 16:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:40.468 16:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.468 16:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:40.468 16:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.468 16:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.468 16:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:40.468 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:40.468 16:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:40.468 16:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.468 16:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.468 16:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:18:40.468 16:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:18:40.468 16:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.468 16:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.468 16:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:40.468 16:38:17 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.468 16:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:40.468 16:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.468 16:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.468 16:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:40.468 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:40.468 16:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:40.468 16:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.468 16:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.468 16:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:18:40.468 16:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.468 16:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:18:40.468 16:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.468 16:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:40.468 16:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.468 16:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:40.468 16:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.468 16:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.468 16:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:40.468 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:40.468 16:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:40.468 16:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.468 16:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.468 16:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:18:40.468 16:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:18:40.468 16:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.468 16:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.468 16:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:40.468 16:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.468 16:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:40.468 16:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.468 16:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.468 16:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:40.468 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:40.468 16:38:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:40.468 16:38:17 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.468 16:38:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.468 16:38:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:18:40.468 16:38:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:18:40.468 16:38:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.468 16:38:17 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.468 16:38:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:40.468 16:38:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.468 16:38:17 -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.468 16:38:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.468 16:38:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.468 16:38:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:40.727 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:40.727 16:38:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:40.727 16:38:18 -- common/autotest_common.sh@1208 -- # local i=0 00:18:40.727 16:38:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:18:40.727 16:38:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:18:40.727 16:38:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:18:40.727 16:38:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:18:40.727 16:38:18 -- common/autotest_common.sh@1220 -- # return 0 00:18:40.727 16:38:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:40.727 16:38:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.727 16:38:18 -- common/autotest_common.sh@10 -- # set +x 00:18:40.727 16:38:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.727 16:38:18 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:40.727 16:38:18 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:40.727 16:38:18 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:40.727 16:38:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:40.727 16:38:18 -- nvmf/common.sh@116 -- # sync 00:18:40.727 16:38:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:40.727 16:38:18 -- nvmf/common.sh@119 -- # set +e 00:18:40.727 16:38:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:40.727 16:38:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:40.727 rmmod nvme_tcp 00:18:40.727 rmmod nvme_fabrics 00:18:40.727 rmmod nvme_keyring 00:18:40.727 16:38:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:40.728 16:38:18 -- nvmf/common.sh@123 -- # set -e 00:18:40.728 16:38:18 -- nvmf/common.sh@124 -- # return 0 00:18:40.728 16:38:18 -- nvmf/common.sh@477 -- # '[' -n 90812 ']' 00:18:40.728 16:38:18 -- nvmf/common.sh@478 -- # killprocess 90812 00:18:40.728 16:38:18 -- common/autotest_common.sh@936 -- # '[' -z 90812 ']' 00:18:40.728 16:38:18 -- common/autotest_common.sh@940 -- # kill -0 90812 00:18:40.728 16:38:18 -- common/autotest_common.sh@941 -- # uname 00:18:40.728 16:38:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.728 16:38:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90812 00:18:40.728 killing process with pid 90812 00:18:40.728 16:38:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:40.728 16:38:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:40.728 16:38:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90812' 00:18:40.728 16:38:18 -- common/autotest_common.sh@955 -- # kill 90812 00:18:40.728 16:38:18 -- common/autotest_common.sh@960 -- # wait 90812 00:18:41.296 16:38:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:41.296 16:38:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:41.296 16:38:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:41.296 16:38:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:41.296 16:38:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:41.296 16:38:18 -- 
nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.296 16:38:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.296 16:38:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.296 16:38:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:41.296 00:18:41.296 real 0m49.974s 00:18:41.296 user 2m48.424s 00:18:41.296 sys 0m24.637s 00:18:41.296 16:38:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:41.296 16:38:18 -- common/autotest_common.sh@10 -- # set +x 00:18:41.296 ************************************ 00:18:41.296 END TEST nvmf_multiconnection 00:18:41.296 ************************************ 00:18:41.296 16:38:18 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:41.296 16:38:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:41.296 16:38:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:41.296 16:38:18 -- common/autotest_common.sh@10 -- # set +x 00:18:41.296 ************************************ 00:18:41.296 START TEST nvmf_initiator_timeout 00:18:41.296 ************************************ 00:18:41.296 16:38:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:41.296 * Looking for test storage... 00:18:41.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:41.296 16:38:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:41.296 16:38:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:41.296 16:38:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:41.556 16:38:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:41.556 16:38:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:41.556 16:38:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:41.556 16:38:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:41.556 16:38:18 -- scripts/common.sh@335 -- # IFS=.-: 00:18:41.556 16:38:18 -- scripts/common.sh@335 -- # read -ra ver1 00:18:41.556 16:38:18 -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.556 16:38:18 -- scripts/common.sh@336 -- # read -ra ver2 00:18:41.556 16:38:18 -- scripts/common.sh@337 -- # local 'op=<' 00:18:41.556 16:38:18 -- scripts/common.sh@339 -- # ver1_l=2 00:18:41.556 16:38:18 -- scripts/common.sh@340 -- # ver2_l=1 00:18:41.556 16:38:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:41.556 16:38:18 -- scripts/common.sh@343 -- # case "$op" in 00:18:41.556 16:38:18 -- scripts/common.sh@344 -- # : 1 00:18:41.556 16:38:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:41.556 16:38:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.556 16:38:18 -- scripts/common.sh@364 -- # decimal 1 00:18:41.556 16:38:18 -- scripts/common.sh@352 -- # local d=1 00:18:41.556 16:38:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.556 16:38:18 -- scripts/common.sh@354 -- # echo 1 00:18:41.556 16:38:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:41.556 16:38:18 -- scripts/common.sh@365 -- # decimal 2 00:18:41.556 16:38:18 -- scripts/common.sh@352 -- # local d=2 00:18:41.556 16:38:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.556 16:38:18 -- scripts/common.sh@354 -- # echo 2 00:18:41.556 16:38:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:41.556 16:38:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:41.556 16:38:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:41.556 16:38:18 -- scripts/common.sh@367 -- # return 0 00:18:41.556 16:38:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.556 16:38:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.556 --rc genhtml_branch_coverage=1 00:18:41.556 --rc genhtml_function_coverage=1 00:18:41.556 --rc genhtml_legend=1 00:18:41.556 --rc geninfo_all_blocks=1 00:18:41.556 --rc geninfo_unexecuted_blocks=1 00:18:41.556 00:18:41.556 ' 00:18:41.556 16:38:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.556 --rc genhtml_branch_coverage=1 00:18:41.556 --rc genhtml_function_coverage=1 00:18:41.556 --rc genhtml_legend=1 00:18:41.556 --rc geninfo_all_blocks=1 00:18:41.556 --rc geninfo_unexecuted_blocks=1 00:18:41.556 00:18:41.556 ' 00:18:41.556 16:38:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.556 --rc genhtml_branch_coverage=1 00:18:41.556 --rc genhtml_function_coverage=1 00:18:41.556 --rc genhtml_legend=1 00:18:41.556 --rc geninfo_all_blocks=1 00:18:41.556 --rc geninfo_unexecuted_blocks=1 00:18:41.556 00:18:41.556 ' 00:18:41.557 16:38:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:41.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.557 --rc genhtml_branch_coverage=1 00:18:41.557 --rc genhtml_function_coverage=1 00:18:41.557 --rc genhtml_legend=1 00:18:41.557 --rc geninfo_all_blocks=1 00:18:41.557 --rc geninfo_unexecuted_blocks=1 00:18:41.557 00:18:41.557 ' 00:18:41.557 16:38:18 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.557 16:38:18 -- nvmf/common.sh@7 -- # uname -s 00:18:41.557 16:38:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.557 16:38:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.557 16:38:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.557 16:38:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.557 16:38:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.557 16:38:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.557 16:38:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.557 16:38:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.557 16:38:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.557 16:38:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.557 16:38:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
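Aside: the host NQN recorded just above is the output of 'nvme gen-hostnqn' — a fixed 'nqn.2014-08.org.nvmexpress:uuid:' prefix followed by a random UUID. A minimal sketch of the same format, assuming only uuidgen is available (illustrative only, not part of the trace):

    # illustrative: reproduce the gen-hostnqn output format without nvme-cli
    printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen)"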
00:18:41.557 16:38:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:18:41.557 16:38:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.557 16:38:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.557 16:38:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.557 16:38:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.557 16:38:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.557 16:38:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.557 16:38:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.557 16:38:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.557 16:38:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.557 16:38:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.557 16:38:18 -- paths/export.sh@5 -- # export PATH 00:18:41.557 16:38:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.557 16:38:18 -- nvmf/common.sh@46 -- # : 0 00:18:41.557 16:38:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:41.557 16:38:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:41.557 16:38:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:41.557 16:38:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.557 16:38:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.557 16:38:18 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:41.557 16:38:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:41.557 16:38:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:41.557 16:38:18 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:41.557 16:38:18 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:41.557 16:38:18 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:41.557 16:38:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:41.557 16:38:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.557 16:38:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:41.557 16:38:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:41.557 16:38:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:41.557 16:38:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.557 16:38:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.557 16:38:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.557 16:38:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:41.557 16:38:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:41.557 16:38:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:41.557 16:38:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:41.557 16:38:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:41.557 16:38:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:41.557 16:38:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.557 16:38:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.557 16:38:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:41.557 16:38:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:41.557 16:38:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.557 16:38:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.557 16:38:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.557 16:38:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.557 16:38:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.557 16:38:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.557 16:38:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.557 16:38:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.557 16:38:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:41.557 16:38:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:41.557 Cannot find device "nvmf_tgt_br" 00:18:41.557 16:38:18 -- nvmf/common.sh@154 -- # true 00:18:41.557 16:38:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.557 Cannot find device "nvmf_tgt_br2" 00:18:41.557 16:38:18 -- nvmf/common.sh@155 -- # true 00:18:41.557 16:38:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:41.557 16:38:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:41.557 Cannot find device "nvmf_tgt_br" 00:18:41.557 16:38:18 -- nvmf/common.sh@157 -- # true 00:18:41.557 16:38:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:41.557 Cannot find device "nvmf_tgt_br2" 00:18:41.557 16:38:18 -- nvmf/common.sh@158 -- # true 00:18:41.557 16:38:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:41.557 16:38:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:41.557 16:38:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:41.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.557 16:38:19 -- nvmf/common.sh@161 -- # true 00:18:41.557 16:38:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.557 16:38:19 -- nvmf/common.sh@162 -- # true 00:18:41.557 16:38:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.557 16:38:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.557 16:38:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.816 16:38:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.816 16:38:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.816 16:38:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.816 16:38:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.816 16:38:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.816 16:38:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.816 16:38:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:41.816 16:38:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:41.816 16:38:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:41.816 16:38:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:41.816 16:38:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.816 16:38:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.816 16:38:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.816 16:38:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:41.816 16:38:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:41.816 16:38:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.816 16:38:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.816 16:38:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.816 16:38:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.816 16:38:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.816 16:38:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:41.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:18:41.816 00:18:41.816 --- 10.0.0.2 ping statistics --- 00:18:41.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.816 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:41.816 16:38:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:41.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:41.816 00:18:41.816 --- 10.0.0.3 ping statistics --- 00:18:41.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.816 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:41.816 16:38:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:41.816 00:18:41.816 --- 10.0.0.1 ping statistics --- 00:18:41.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.816 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:41.816 16:38:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.816 16:38:19 -- nvmf/common.sh@421 -- # return 0 00:18:41.816 16:38:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:41.816 16:38:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.816 16:38:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:41.816 16:38:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:41.816 16:38:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.816 16:38:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:41.816 16:38:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:41.816 16:38:19 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:41.816 16:38:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:41.816 16:38:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:41.816 16:38:19 -- common/autotest_common.sh@10 -- # set +x 00:18:41.816 16:38:19 -- nvmf/common.sh@469 -- # nvmfpid=91896 00:18:41.816 16:38:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:41.816 16:38:19 -- nvmf/common.sh@470 -- # waitforlisten 91896 00:18:41.816 16:38:19 -- common/autotest_common.sh@829 -- # '[' -z 91896 ']' 00:18:41.816 16:38:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.816 16:38:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.816 16:38:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.816 16:38:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.816 16:38:19 -- common/autotest_common.sh@10 -- # set +x 00:18:41.816 [2024-11-16 16:38:19.282577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:41.816 [2024-11-16 16:38:19.282639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.076 [2024-11-16 16:38:19.415352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.076 [2024-11-16 16:38:19.473415] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:42.076 [2024-11-16 16:38:19.473600] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.076 [2024-11-16 16:38:19.473612] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.076 [2024-11-16 16:38:19.473620] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.076 [2024-11-16 16:38:19.473752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.076 [2024-11-16 16:38:19.474366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.076 [2024-11-16 16:38:19.474504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.076 [2024-11-16 16:38:19.474516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.012 16:38:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.012 16:38:20 -- common/autotest_common.sh@862 -- # return 0 00:18:43.012 16:38:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:43.012 16:38:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:43.012 16:38:20 -- common/autotest_common.sh@10 -- # set +x 00:18:43.012 16:38:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.012 16:38:20 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:43.012 16:38:20 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:43.012 16:38:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.012 16:38:20 -- common/autotest_common.sh@10 -- # set +x 00:18:43.012 Malloc0 00:18:43.012 16:38:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.012 16:38:20 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:43.012 16:38:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.012 16:38:20 -- common/autotest_common.sh@10 -- # set +x 00:18:43.012 Delay0 00:18:43.012 16:38:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.012 16:38:20 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:43.012 16:38:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.012 16:38:20 -- common/autotest_common.sh@10 -- # set +x 00:18:43.012 [2024-11-16 16:38:20.442309] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.012 16:38:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.012 16:38:20 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:43.012 16:38:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.012 16:38:20 -- common/autotest_common.sh@10 -- # set +x 00:18:43.012 16:38:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.012 16:38:20 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:43.012 16:38:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.012 16:38:20 -- common/autotest_common.sh@10 -- # set +x 00:18:43.012 16:38:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.012 16:38:20 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.012 16:38:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.012 16:38:20 -- common/autotest_common.sh@10 -- # set +x 00:18:43.012 [2024-11-16 16:38:20.470544] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.012 16:38:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.012 16:38:20 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:43.271 16:38:20 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:43.271 16:38:20 -- common/autotest_common.sh@1187 -- # local i=0 00:18:43.271 16:38:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.271 16:38:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:43.271 16:38:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:45.176 16:38:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:45.176 16:38:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:45.176 16:38:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:18:45.435 16:38:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:45.435 16:38:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.435 16:38:22 -- common/autotest_common.sh@1197 -- # return 0 00:18:45.435 16:38:22 -- target/initiator_timeout.sh@35 -- # fio_pid=91984 00:18:45.435 16:38:22 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:45.435 16:38:22 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:45.435 [global] 00:18:45.435 thread=1 00:18:45.435 invalidate=1 00:18:45.435 rw=write 00:18:45.435 time_based=1 00:18:45.435 runtime=60 00:18:45.435 ioengine=libaio 00:18:45.435 direct=1 00:18:45.435 bs=4096 00:18:45.435 iodepth=1 00:18:45.435 norandommap=0 00:18:45.435 numjobs=1 00:18:45.435 00:18:45.435 verify_dump=1 00:18:45.435 verify_backlog=512 00:18:45.435 verify_state_save=0 00:18:45.435 do_verify=1 00:18:45.435 verify=crc32c-intel 00:18:45.435 [job0] 00:18:45.435 filename=/dev/nvme0n1 00:18:45.435 Could not set queue depth (nvme0n1) 00:18:45.435 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:45.435 fio-3.35 00:18:45.435 Starting 1 thread 00:18:48.750 16:38:25 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:48.750 16:38:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.750 16:38:25 -- common/autotest_common.sh@10 -- # set +x 00:18:48.750 true 00:18:48.750 16:38:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.750 16:38:25 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:48.750 16:38:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.750 16:38:25 -- common/autotest_common.sh@10 -- # set +x 00:18:48.750 true 00:18:48.750 16:38:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.750 16:38:25 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:48.750 16:38:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.750 16:38:25 -- common/autotest_common.sh@10 -- # set +x 00:18:48.750 true 00:18:48.750 16:38:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.750 16:38:25 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:48.750 16:38:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.750 16:38:25 -- common/autotest_common.sh@10 -- # set +x 00:18:48.750 true 00:18:48.750 16:38:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.750 16:38:25 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:51.283 16:38:28 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:51.283 16:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.283 16:38:28 -- common/autotest_common.sh@10 -- # set +x 00:18:51.283 true 00:18:51.283 16:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.283 16:38:28 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:51.283 16:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.283 16:38:28 -- common/autotest_common.sh@10 -- # set +x 00:18:51.283 true 00:18:51.283 16:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.283 16:38:28 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:51.283 16:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.283 16:38:28 -- common/autotest_common.sh@10 -- # set +x 00:18:51.283 true 00:18:51.283 16:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.283 16:38:28 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:51.283 16:38:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.283 16:38:28 -- common/autotest_common.sh@10 -- # set +x 00:18:51.283 true 00:18:51.283 16:38:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.283 16:38:28 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:51.283 16:38:28 -- target/initiator_timeout.sh@54 -- # wait 91984 00:19:47.601 00:19:47.601 job0: (groupid=0, jobs=1): err= 0: pid=92005: Sat Nov 16 16:39:22 2024 00:19:47.601 read: IOPS=824, BW=3298KiB/s (3377kB/s)(193MiB/60000msec) 00:19:47.601 slat (usec): min=10, max=8588, avg=13.53, stdev=51.66 00:19:47.601 clat (usec): min=152, max=40744k, avg=1020.00, stdev=183190.62 00:19:47.601 lat (usec): min=164, max=40744k, avg=1033.53, stdev=183190.64 00:19:47.601 clat percentiles (usec): 00:19:47.601 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:19:47.601 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:19:47.601 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:19:47.601 | 99.00th=[ 258], 99.50th=[ 289], 99.90th=[ 506], 99.95th=[ 586], 00:19:47.601 | 99.99th=[ 1631] 00:19:47.602 write: IOPS=827, BW=3311KiB/s (3390kB/s)(194MiB/60000msec); 0 zone resets 00:19:47.602 slat (usec): min=16, max=724, avg=19.73, stdev= 7.88 00:19:47.602 clat (usec): min=118, max=2176, avg=156.18, stdev=28.42 00:19:47.602 lat (usec): min=136, max=2195, avg=175.91, stdev=30.35 00:19:47.602 clat percentiles (usec): 00:19:47.602 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:19:47.602 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:19:47.602 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 188], 00:19:47.602 | 99.00th=[ 219], 99.50th=[ 251], 99.90th=[ 529], 99.95th=[ 685], 00:19:47.602 | 99.99th=[ 1037] 00:19:47.602 bw ( KiB/s): min= 6000, max=12288, per=100.00%, avg=10240.00, stdev=1416.65, samples=38 00:19:47.602 iops : min= 1500, max= 3072, avg=2560.00, stdev=354.16, samples=38 00:19:47.602 lat (usec) : 250=99.00%, 500=0.89%, 750=0.08%, 1000=0.02% 00:19:47.602 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:19:47.602 cpu : usr=0.48%, sys=2.02%, ctx=99191, majf=0, minf=5 00:19:47.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:19:47.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.602 issued rwts: total=49467,49664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.602 00:19:47.602 Run status group 0 (all jobs): 00:19:47.602 READ: bw=3298KiB/s (3377kB/s), 3298KiB/s-3298KiB/s (3377kB/s-3377kB/s), io=193MiB (203MB), run=60000-60000msec 00:19:47.602 WRITE: bw=3311KiB/s (3390kB/s), 3311KiB/s-3311KiB/s (3390kB/s-3390kB/s), io=194MiB (203MB), run=60000-60000msec 00:19:47.602 00:19:47.602 Disk stats (read/write): 00:19:47.602 nvme0n1: ios=49449/49447, merge=0/0, ticks=10041/8230, in_queue=18271, util=99.78% 00:19:47.602 16:39:22 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:47.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:47.602 16:39:23 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:47.602 16:39:23 -- common/autotest_common.sh@1208 -- # local i=0 00:19:47.602 16:39:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:47.602 16:39:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:47.602 16:39:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:47.602 16:39:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:47.602 nvmf hotplug test: fio successful as expected 00:19:47.602 16:39:23 -- common/autotest_common.sh@1220 -- # return 0 00:19:47.602 16:39:23 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:47.602 16:39:23 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:47.602 16:39:23 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.602 16:39:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.602 16:39:23 -- common/autotest_common.sh@10 -- # set +x 00:19:47.602 16:39:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.602 16:39:23 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:47.602 16:39:23 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:47.602 16:39:23 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:47.602 16:39:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:47.602 16:39:23 -- nvmf/common.sh@116 -- # sync 00:19:47.602 16:39:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:47.602 16:39:23 -- nvmf/common.sh@119 -- # set +e 00:19:47.602 16:39:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:47.602 16:39:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:47.602 rmmod nvme_tcp 00:19:47.602 rmmod nvme_fabrics 00:19:47.602 rmmod nvme_keyring 00:19:47.602 16:39:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:47.602 16:39:23 -- nvmf/common.sh@123 -- # set -e 00:19:47.602 16:39:23 -- nvmf/common.sh@124 -- # return 0 00:19:47.602 16:39:23 -- nvmf/common.sh@477 -- # '[' -n 91896 ']' 00:19:47.602 16:39:23 -- nvmf/common.sh@478 -- # killprocess 91896 00:19:47.602 16:39:23 -- common/autotest_common.sh@936 -- # '[' -z 91896 ']' 00:19:47.602 16:39:23 -- common/autotest_common.sh@940 -- # kill -0 91896 00:19:47.602 16:39:23 -- common/autotest_common.sh@941 -- # uname 00:19:47.602 16:39:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:47.602 16:39:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91896 00:19:47.602 killing process with pid 
91896 00:19:47.602 16:39:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:47.602 16:39:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:47.602 16:39:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91896' 00:19:47.602 16:39:23 -- common/autotest_common.sh@955 -- # kill 91896 00:19:47.602 16:39:23 -- common/autotest_common.sh@960 -- # wait 91896 00:19:47.602 16:39:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:47.602 16:39:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:47.602 16:39:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:47.602 16:39:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.602 16:39:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:47.602 16:39:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.602 16:39:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.602 16:39:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.602 16:39:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:47.602 00:19:47.602 real 1m4.887s 00:19:47.602 user 4m8.880s 00:19:47.602 sys 0m7.412s 00:19:47.602 16:39:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:47.602 16:39:23 -- common/autotest_common.sh@10 -- # set +x 00:19:47.602 ************************************ 00:19:47.602 END TEST nvmf_initiator_timeout 00:19:47.602 ************************************ 00:19:47.602 16:39:23 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:47.602 16:39:23 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:47.602 16:39:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.602 16:39:23 -- common/autotest_common.sh@10 -- # set +x 00:19:47.602 16:39:23 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:47.602 16:39:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.602 16:39:23 -- common/autotest_common.sh@10 -- # set +x 00:19:47.602 16:39:23 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:47.602 16:39:23 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:47.602 16:39:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:47.602 16:39:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:47.602 16:39:23 -- common/autotest_common.sh@10 -- # set +x 00:19:47.602 ************************************ 00:19:47.602 START TEST nvmf_multicontroller 00:19:47.602 ************************************ 00:19:47.602 16:39:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:47.602 * Looking for test storage... 
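Aside: the initiator-timeout test that just finished works by wrapping Malloc0 in a delay bdev. fio writes through Delay0 over NVMe/TCP while the delay-bdev latencies are raised from 30 us to roughly 31 s (delay-bdev latency arguments are in microseconds), held past the initiator timeout, then restored so the job can complete. A condensed sketch of the RPC sequence visible in the trace — the standalone 'rpc.py' invocation style is an assumption; the script itself issues these through rpc_cmd:

    # sketch of the delay-bdev sequence from the trace above ('rpc.py' on PATH assumed)
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    # ... fio runs against Delay0 over NVMe/TCP (see the job file earlier) ...
    rpc.py bdev_delay_update_latency Delay0 avg_read  31000000    # ~31 s, past the initiator timeout
    rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
    rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    rpc.py bdev_delay_update_latency Delay0 avg_read  30          # restore so fio can finish
    rpc.py bdev_delay_update_latency Delay0 avg_write 30
    rpc.py bdev_delay_update_latency Delay0 p99_read  30
    rpc.py bdev_delay_update_latency Delay0 p99_write 30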
00:19:47.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.602 16:39:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:47.602 16:39:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:47.602 16:39:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:47.602 16:39:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:47.602 16:39:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:47.602 16:39:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:47.602 16:39:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:47.602 16:39:23 -- scripts/common.sh@335 -- # IFS=.-: 00:19:47.602 16:39:23 -- scripts/common.sh@335 -- # read -ra ver1 00:19:47.602 16:39:23 -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.602 16:39:23 -- scripts/common.sh@336 -- # read -ra ver2 00:19:47.602 16:39:23 -- scripts/common.sh@337 -- # local 'op=<' 00:19:47.602 16:39:23 -- scripts/common.sh@339 -- # ver1_l=2 00:19:47.602 16:39:23 -- scripts/common.sh@340 -- # ver2_l=1 00:19:47.602 16:39:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:47.602 16:39:23 -- scripts/common.sh@343 -- # case "$op" in 00:19:47.602 16:39:23 -- scripts/common.sh@344 -- # : 1 00:19:47.602 16:39:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:47.602 16:39:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:47.602 16:39:23 -- scripts/common.sh@364 -- # decimal 1 00:19:47.602 16:39:23 -- scripts/common.sh@352 -- # local d=1 00:19:47.602 16:39:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.602 16:39:23 -- scripts/common.sh@354 -- # echo 1 00:19:47.602 16:39:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:47.602 16:39:23 -- scripts/common.sh@365 -- # decimal 2 00:19:47.602 16:39:23 -- scripts/common.sh@352 -- # local d=2 00:19:47.602 16:39:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.602 16:39:23 -- scripts/common.sh@354 -- # echo 2 00:19:47.602 16:39:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:47.602 16:39:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:47.602 16:39:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:47.602 16:39:23 -- scripts/common.sh@367 -- # return 0 00:19:47.602 16:39:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.602 16:39:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:47.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.602 --rc genhtml_branch_coverage=1 00:19:47.602 --rc genhtml_function_coverage=1 00:19:47.602 --rc genhtml_legend=1 00:19:47.602 --rc geninfo_all_blocks=1 00:19:47.602 --rc geninfo_unexecuted_blocks=1 00:19:47.602 00:19:47.602 ' 00:19:47.602 16:39:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:47.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.602 --rc genhtml_branch_coverage=1 00:19:47.602 --rc genhtml_function_coverage=1 00:19:47.602 --rc genhtml_legend=1 00:19:47.602 --rc geninfo_all_blocks=1 00:19:47.602 --rc geninfo_unexecuted_blocks=1 00:19:47.602 00:19:47.602 ' 00:19:47.602 16:39:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:47.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.602 --rc genhtml_branch_coverage=1 00:19:47.602 --rc genhtml_function_coverage=1 00:19:47.603 --rc genhtml_legend=1 00:19:47.603 --rc geninfo_all_blocks=1 00:19:47.603 --rc geninfo_unexecuted_blocks=1 00:19:47.603 00:19:47.603 ' 00:19:47.603 
16:39:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:47.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.603 --rc genhtml_branch_coverage=1 00:19:47.603 --rc genhtml_function_coverage=1 00:19:47.603 --rc genhtml_legend=1 00:19:47.603 --rc geninfo_all_blocks=1 00:19:47.603 --rc geninfo_unexecuted_blocks=1 00:19:47.603 00:19:47.603 ' 00:19:47.603 16:39:23 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.603 16:39:23 -- nvmf/common.sh@7 -- # uname -s 00:19:47.603 16:39:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.603 16:39:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.603 16:39:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.603 16:39:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.603 16:39:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.603 16:39:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.603 16:39:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.603 16:39:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.603 16:39:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.603 16:39:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.603 16:39:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:47.603 16:39:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:47.603 16:39:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.603 16:39:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.603 16:39:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.603 16:39:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.603 16:39:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.603 16:39:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.603 16:39:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.603 16:39:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.603 16:39:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.603 16:39:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.603 16:39:23 -- paths/export.sh@5 -- # export PATH 00:19:47.603 16:39:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.603 16:39:23 -- nvmf/common.sh@46 -- # : 0 00:19:47.603 16:39:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.603 16:39:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.603 16:39:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.603 16:39:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.603 16:39:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.603 16:39:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.603 16:39:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.603 16:39:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.603 16:39:23 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:47.603 16:39:23 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:47.603 16:39:23 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:47.603 16:39:23 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:47.603 16:39:23 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:47.603 16:39:23 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:47.603 16:39:23 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:47.603 16:39:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.603 16:39:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.603 16:39:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.603 16:39:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.603 16:39:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.603 16:39:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.603 16:39:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.603 16:39:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.603 16:39:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:47.603 16:39:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:47.603 16:39:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:47.603 16:39:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:47.603 16:39:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:47.603 16:39:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:47.603 16:39:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.603 16:39:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:19:47.603 16:39:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:47.603 16:39:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:47.603 16:39:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.603 16:39:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.603 16:39:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.603 16:39:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.603 16:39:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.603 16:39:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.603 16:39:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.603 16:39:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.603 16:39:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:47.603 16:39:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:47.603 Cannot find device "nvmf_tgt_br" 00:19:47.603 16:39:23 -- nvmf/common.sh@154 -- # true 00:19:47.603 16:39:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.603 Cannot find device "nvmf_tgt_br2" 00:19:47.603 16:39:23 -- nvmf/common.sh@155 -- # true 00:19:47.603 16:39:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:47.603 16:39:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:47.603 Cannot find device "nvmf_tgt_br" 00:19:47.603 16:39:23 -- nvmf/common.sh@157 -- # true 00:19:47.603 16:39:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:47.603 Cannot find device "nvmf_tgt_br2" 00:19:47.603 16:39:23 -- nvmf/common.sh@158 -- # true 00:19:47.603 16:39:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:47.603 16:39:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:47.603 16:39:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.603 16:39:24 -- nvmf/common.sh@161 -- # true 00:19:47.603 16:39:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.603 16:39:24 -- nvmf/common.sh@162 -- # true 00:19:47.603 16:39:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.603 16:39:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.603 16:39:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.603 16:39:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.603 16:39:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.603 16:39:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.603 16:39:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.603 16:39:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:47.603 16:39:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:47.603 16:39:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:47.603 16:39:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:47.603 16:39:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:19:47.603 16:39:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:47.603 16:39:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.603 16:39:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.603 16:39:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.603 16:39:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:47.603 16:39:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:47.603 16:39:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.603 16:39:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.603 16:39:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.603 16:39:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.603 16:39:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:47.603 16:39:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:47.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:19:47.603 00:19:47.603 --- 10.0.0.2 ping statistics --- 00:19:47.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.604 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:47.604 16:39:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:47.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:47.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:19:47.604 00:19:47.604 --- 10.0.0.3 ping statistics --- 00:19:47.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.604 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:47.604 16:39:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:47.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:19:47.604 00:19:47.604 --- 10.0.0.1 ping statistics --- 00:19:47.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.604 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:47.604 16:39:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.604 16:39:24 -- nvmf/common.sh@421 -- # return 0 00:19:47.604 16:39:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:47.604 16:39:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.604 16:39:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:47.604 16:39:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:47.604 16:39:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.604 16:39:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:47.604 16:39:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:47.604 16:39:24 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:47.604 16:39:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:47.604 16:39:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.604 16:39:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.604 16:39:24 -- nvmf/common.sh@469 -- # nvmfpid=92845 00:19:47.604 16:39:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:47.604 16:39:24 -- nvmf/common.sh@470 -- # waitforlisten 92845 00:19:47.604 16:39:24 -- common/autotest_common.sh@829 -- # '[' -z 92845 ']' 00:19:47.604 16:39:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.604 16:39:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.604 16:39:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.604 16:39:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.604 16:39:24 -- common/autotest_common.sh@10 -- # set +x 00:19:47.604 [2024-11-16 16:39:24.328452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:47.604 [2024-11-16 16:39:24.328562] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.604 [2024-11-16 16:39:24.464767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:47.604 [2024-11-16 16:39:24.527401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:47.604 [2024-11-16 16:39:24.527575] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.604 [2024-11-16 16:39:24.527588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.604 [2024-11-16 16:39:24.527597] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
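For reference, the nvmf_veth_init trace above reduces to a small, reproducible topology: one veth pair whose far end is moved into a network namespace for the target, a second root-namespace end for the initiator, and a bridge joining the root-side peers. A minimal sketch, assuming the interface names and 10.0.0.0/24 addressing shown in the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, follows the same pattern); this illustrates what the script performs and is not a substitute for nvmf/common.sh:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                  # root-side peers joined by the bridge
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # reachability check, as in the trace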
00:19:47.604 [2024-11-16 16:39:24.528181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.604 [2024-11-16 16:39:24.528341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.604 [2024-11-16 16:39:24.528411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.862 16:39:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.862 16:39:25 -- common/autotest_common.sh@862 -- # return 0 00:19:47.862 16:39:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:47.862 16:39:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.862 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:47.862 16:39:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.862 16:39:25 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.862 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.862 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:47.862 [2024-11-16 16:39:25.327368] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.862 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.862 16:39:25 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:47.862 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.862 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 Malloc0 00:19:48.121 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.121 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:48.121 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.121 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 [2024-11-16 16:39:25.391335] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.121 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:48.121 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 [2024-11-16 16:39:25.399276] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:48.121 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:48.121 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 Malloc1 00:19:48.121 16:39:25 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:48.121 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:48.121 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:48.121 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:48.121 16:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:48.121 16:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.121 16:39:25 -- host/multicontroller.sh@44 -- # bdevperf_pid=92897 00:19:48.121 16:39:25 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.121 16:39:25 -- host/multicontroller.sh@47 -- # waitforlisten 92897 /var/tmp/bdevperf.sock 00:19:48.121 16:39:25 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:48.121 16:39:25 -- common/autotest_common.sh@829 -- # '[' -z 92897 ']' 00:19:48.121 16:39:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.121 16:39:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.121 16:39:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
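Stripped of the xtrace plumbing, the provisioning just traced is a plain JSON-RPC sequence; rpc_cmd in these tests is a thin wrapper around scripts/rpc.py. A sketch of the equivalent direct calls, assuming the default target socket for the first block and the bdevperf socket named in the trace for the attach:

  # on the target side (default /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # the same four calls are repeated for Malloc1 / nqn.2016-06.io.spdk:cnode2 (serial SPDK00000000000002)

  # on the bdevperf side, pinning the host address and service id; the NOT
  # checks that follow expect re-attach attempts with a different hostnqn,
  # subsystem, or multipath mode to fail against this existing controller
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000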
00:19:48.121 16:39:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.121 16:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:49.058 16:39:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.058 16:39:26 -- common/autotest_common.sh@862 -- # return 0 00:19:49.058 16:39:26 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:49.058 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.058 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.058 NVMe0n1 00:19:49.058 16:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.058 16:39:26 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:49.058 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.058 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.058 16:39:26 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:49.058 16:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.058 1 00:19:49.058 16:39:26 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:49.058 16:39:26 -- common/autotest_common.sh@650 -- # local es=0 00:19:49.058 16:39:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:49.058 16:39:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:49.058 16:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.058 16:39:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:49.058 16:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.058 16:39:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:49.058 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.058 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.058 2024/11/16 16:39:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:49.058 request: 00:19:49.058 { 00:19:49.058 "method": "bdev_nvme_attach_controller", 00:19:49.058 "params": { 00:19:49.059 "name": "NVMe0", 00:19:49.059 "trtype": "tcp", 00:19:49.059 "traddr": "10.0.0.2", 00:19:49.059 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:49.059 "hostaddr": "10.0.0.2", 00:19:49.059 "hostsvcid": "60000", 00:19:49.059 "adrfam": "ipv4", 00:19:49.059 "trsvcid": "4420", 00:19:49.059 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:49.059 } 00:19:49.059 } 00:19:49.059 Got JSON-RPC error response 00:19:49.059 GoRPCClient: error on JSON-RPC call 00:19:49.059 16:39:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:49.059 16:39:26 -- 
common/autotest_common.sh@653 -- # es=1 00:19:49.059 16:39:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:49.059 16:39:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:49.059 16:39:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:49.059 16:39:26 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:49.059 16:39:26 -- common/autotest_common.sh@650 -- # local es=0 00:19:49.059 16:39:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:49.059 16:39:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:49.059 16:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.059 16:39:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:49.059 16:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.059 16:39:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:49.059 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.059 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.059 2024/11/16 16:39:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:49.059 request: 00:19:49.059 { 00:19:49.059 "method": "bdev_nvme_attach_controller", 00:19:49.059 "params": { 00:19:49.059 "name": "NVMe0", 00:19:49.059 "trtype": "tcp", 00:19:49.059 "traddr": "10.0.0.2", 00:19:49.059 "hostaddr": "10.0.0.2", 00:19:49.059 "hostsvcid": "60000", 00:19:49.059 "adrfam": "ipv4", 00:19:49.059 "trsvcid": "4420", 00:19:49.059 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:49.059 } 00:19:49.059 } 00:19:49.059 Got JSON-RPC error response 00:19:49.059 GoRPCClient: error on JSON-RPC call 00:19:49.059 16:39:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:49.059 16:39:26 -- common/autotest_common.sh@653 -- # es=1 00:19:49.059 16:39:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:49.059 16:39:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:49.059 16:39:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:49.059 16:39:26 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:49.059 16:39:26 -- common/autotest_common.sh@650 -- # local es=0 00:19:49.059 16:39:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:49.059 16:39:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:49.059 16:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.059 16:39:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:49.059 16:39:26 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.059 16:39:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:49.059 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.059 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.059 2024/11/16 16:39:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:49.059 request: 00:19:49.059 { 00:19:49.059 "method": "bdev_nvme_attach_controller", 00:19:49.059 "params": { 00:19:49.059 "name": "NVMe0", 00:19:49.059 "trtype": "tcp", 00:19:49.059 "traddr": "10.0.0.2", 00:19:49.059 "hostaddr": "10.0.0.2", 00:19:49.059 "hostsvcid": "60000", 00:19:49.059 "adrfam": "ipv4", 00:19:49.059 "trsvcid": "4420", 00:19:49.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.059 "multipath": "disable" 00:19:49.059 } 00:19:49.059 } 00:19:49.059 Got JSON-RPC error response 00:19:49.059 GoRPCClient: error on JSON-RPC call 00:19:49.059 16:39:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:49.059 16:39:26 -- common/autotest_common.sh@653 -- # es=1 00:19:49.059 16:39:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:49.059 16:39:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:49.059 16:39:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:49.059 16:39:26 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:49.059 16:39:26 -- common/autotest_common.sh@650 -- # local es=0 00:19:49.059 16:39:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:49.059 16:39:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:49.059 16:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.059 16:39:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:49.317 16:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.317 16:39:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:49.317 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.317 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.317 2024/11/16 16:39:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:49.317 request: 00:19:49.317 { 00:19:49.317 "method": "bdev_nvme_attach_controller", 00:19:49.317 "params": { 00:19:49.317 "name": "NVMe0", 
00:19:49.317 "trtype": "tcp", 00:19:49.317 "traddr": "10.0.0.2", 00:19:49.318 "hostaddr": "10.0.0.2", 00:19:49.318 "hostsvcid": "60000", 00:19:49.318 "adrfam": "ipv4", 00:19:49.318 "trsvcid": "4420", 00:19:49.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.318 "multipath": "failover" 00:19:49.318 } 00:19:49.318 } 00:19:49.318 Got JSON-RPC error response 00:19:49.318 GoRPCClient: error on JSON-RPC call 00:19:49.318 16:39:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:49.318 16:39:26 -- common/autotest_common.sh@653 -- # es=1 00:19:49.318 16:39:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:49.318 16:39:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:49.318 16:39:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:49.318 16:39:26 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:49.318 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.318 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.318 00:19:49.318 16:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.318 16:39:26 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:49.318 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.318 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.318 16:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.318 16:39:26 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:49.318 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.318 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.318 00:19:49.318 16:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.318 16:39:26 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:49.318 16:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.318 16:39:26 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:49.318 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:49.318 16:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.318 16:39:26 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:49.318 16:39:26 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:50.693 0 00:19:50.693 16:39:27 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:50.693 16:39:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.693 16:39:27 -- common/autotest_common.sh@10 -- # set +x 00:19:50.693 16:39:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.693 16:39:27 -- host/multicontroller.sh@100 -- # killprocess 92897 00:19:50.693 16:39:27 -- common/autotest_common.sh@936 -- # '[' -z 92897 ']' 00:19:50.693 16:39:27 -- common/autotest_common.sh@940 -- # kill -0 92897 00:19:50.693 16:39:27 -- common/autotest_common.sh@941 -- # uname 00:19:50.693 16:39:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:50.693 16:39:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92897 00:19:50.693 16:39:27 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:19:50.693 16:39:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:50.693 killing process with pid 92897 00:19:50.693 16:39:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92897' 00:19:50.693 16:39:27 -- common/autotest_common.sh@955 -- # kill 92897 00:19:50.693 16:39:27 -- common/autotest_common.sh@960 -- # wait 92897 00:19:50.693 16:39:28 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.693 16:39:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.693 16:39:28 -- common/autotest_common.sh@10 -- # set +x 00:19:50.952 16:39:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.953 16:39:28 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:50.953 16:39:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.953 16:39:28 -- common/autotest_common.sh@10 -- # set +x 00:19:50.953 16:39:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.953 16:39:28 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:50.953 16:39:28 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:50.953 16:39:28 -- common/autotest_common.sh@1607 -- # read -r file 00:19:50.953 16:39:28 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:50.953 16:39:28 -- common/autotest_common.sh@1606 -- # sort -u 00:19:50.953 16:39:28 -- common/autotest_common.sh@1608 -- # cat 00:19:50.953 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:50.953 [2024-11-16 16:39:25.512810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:50.953 [2024-11-16 16:39:25.512915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92897 ] 00:19:50.953 [2024-11-16 16:39:25.654713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.953 [2024-11-16 16:39:25.732363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.953 [2024-11-16 16:39:26.696676] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name ba25ee23-7136-4636-b8fa-c43cb81fbb76 already exists 00:19:50.953 [2024-11-16 16:39:26.696747] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:ba25ee23-7136-4636-b8fa-c43cb81fbb76 alias for bdev NVMe1n1 00:19:50.953 [2024-11-16 16:39:26.696794] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:50.953 Running I/O for 1 seconds... 
00:19:50.953 00:19:50.953 Latency(us) 00:19:50.953 [2024-11-16T16:39:28.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.953 [2024-11-16T16:39:28.444Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:50.953 NVMe0n1 : 1.00 22934.63 89.59 0.00 0.00 5568.19 3038.49 12630.57 00:19:50.953 [2024-11-16T16:39:28.444Z] =================================================================================================================== 00:19:50.953 [2024-11-16T16:39:28.444Z] Total : 22934.63 89.59 0.00 0.00 5568.19 3038.49 12630.57 00:19:50.953 Received shutdown signal, test time was about 1.000000 seconds 00:19:50.953 00:19:50.953 Latency(us) 00:19:50.953 [2024-11-16T16:39:28.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.953 [2024-11-16T16:39:28.444Z] =================================================================================================================== 00:19:50.953 [2024-11-16T16:39:28.444Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.953 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:50.953 16:39:28 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:50.953 16:39:28 -- common/autotest_common.sh@1607 -- # read -r file 00:19:50.953 16:39:28 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:50.953 16:39:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:50.953 16:39:28 -- nvmf/common.sh@116 -- # sync 00:19:50.953 16:39:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:50.953 16:39:28 -- nvmf/common.sh@119 -- # set +e 00:19:50.953 16:39:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:50.953 16:39:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:50.953 rmmod nvme_tcp 00:19:50.953 rmmod nvme_fabrics 00:19:50.953 rmmod nvme_keyring 00:19:50.953 16:39:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:50.953 16:39:28 -- nvmf/common.sh@123 -- # set -e 00:19:50.953 16:39:28 -- nvmf/common.sh@124 -- # return 0 00:19:50.953 16:39:28 -- nvmf/common.sh@477 -- # '[' -n 92845 ']' 00:19:50.953 16:39:28 -- nvmf/common.sh@478 -- # killprocess 92845 00:19:50.953 16:39:28 -- common/autotest_common.sh@936 -- # '[' -z 92845 ']' 00:19:50.953 16:39:28 -- common/autotest_common.sh@940 -- # kill -0 92845 00:19:50.953 16:39:28 -- common/autotest_common.sh@941 -- # uname 00:19:50.953 16:39:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:50.953 16:39:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92845 00:19:50.953 16:39:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:50.953 16:39:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:50.953 16:39:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92845' 00:19:50.953 killing process with pid 92845 00:19:50.953 16:39:28 -- common/autotest_common.sh@955 -- # kill 92845 00:19:50.953 16:39:28 -- common/autotest_common.sh@960 -- # wait 92845 00:19:51.212 16:39:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:51.212 16:39:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:51.212 16:39:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:51.212 16:39:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.212 16:39:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:51.212 16:39:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.212 16:39:28 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:51.212 16:39:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.212 16:39:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:51.212 00:19:51.212 real 0m4.957s 00:19:51.212 user 0m15.254s 00:19:51.212 sys 0m1.113s 00:19:51.212 16:39:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:51.212 16:39:28 -- common/autotest_common.sh@10 -- # set +x 00:19:51.212 ************************************ 00:19:51.212 END TEST nvmf_multicontroller 00:19:51.212 ************************************ 00:19:51.212 16:39:28 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:51.212 16:39:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:51.212 16:39:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:51.212 16:39:28 -- common/autotest_common.sh@10 -- # set +x 00:19:51.212 ************************************ 00:19:51.212 START TEST nvmf_aer 00:19:51.212 ************************************ 00:19:51.212 16:39:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:51.471 * Looking for test storage... 00:19:51.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:51.471 16:39:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:51.471 16:39:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:51.471 16:39:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:51.471 16:39:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:51.471 16:39:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:51.471 16:39:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:51.471 16:39:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:51.471 16:39:28 -- scripts/common.sh@335 -- # IFS=.-: 00:19:51.471 16:39:28 -- scripts/common.sh@335 -- # read -ra ver1 00:19:51.471 16:39:28 -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.471 16:39:28 -- scripts/common.sh@336 -- # read -ra ver2 00:19:51.471 16:39:28 -- scripts/common.sh@337 -- # local 'op=<' 00:19:51.471 16:39:28 -- scripts/common.sh@339 -- # ver1_l=2 00:19:51.471 16:39:28 -- scripts/common.sh@340 -- # ver2_l=1 00:19:51.471 16:39:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:51.471 16:39:28 -- scripts/common.sh@343 -- # case "$op" in 00:19:51.471 16:39:28 -- scripts/common.sh@344 -- # : 1 00:19:51.471 16:39:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:51.471 16:39:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.471 16:39:28 -- scripts/common.sh@364 -- # decimal 1 00:19:51.471 16:39:28 -- scripts/common.sh@352 -- # local d=1 00:19:51.471 16:39:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.471 16:39:28 -- scripts/common.sh@354 -- # echo 1 00:19:51.471 16:39:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:51.471 16:39:28 -- scripts/common.sh@365 -- # decimal 2 00:19:51.471 16:39:28 -- scripts/common.sh@352 -- # local d=2 00:19:51.471 16:39:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.471 16:39:28 -- scripts/common.sh@354 -- # echo 2 00:19:51.471 16:39:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:51.471 16:39:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:51.471 16:39:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:51.471 16:39:28 -- scripts/common.sh@367 -- # return 0 00:19:51.471 16:39:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.471 16:39:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:51.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.471 --rc genhtml_branch_coverage=1 00:19:51.471 --rc genhtml_function_coverage=1 00:19:51.471 --rc genhtml_legend=1 00:19:51.471 --rc geninfo_all_blocks=1 00:19:51.471 --rc geninfo_unexecuted_blocks=1 00:19:51.471 00:19:51.471 ' 00:19:51.471 16:39:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:51.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.471 --rc genhtml_branch_coverage=1 00:19:51.471 --rc genhtml_function_coverage=1 00:19:51.471 --rc genhtml_legend=1 00:19:51.471 --rc geninfo_all_blocks=1 00:19:51.471 --rc geninfo_unexecuted_blocks=1 00:19:51.471 00:19:51.471 ' 00:19:51.471 16:39:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:51.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.471 --rc genhtml_branch_coverage=1 00:19:51.471 --rc genhtml_function_coverage=1 00:19:51.471 --rc genhtml_legend=1 00:19:51.471 --rc geninfo_all_blocks=1 00:19:51.471 --rc geninfo_unexecuted_blocks=1 00:19:51.471 00:19:51.471 ' 00:19:51.471 16:39:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:51.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.471 --rc genhtml_branch_coverage=1 00:19:51.471 --rc genhtml_function_coverage=1 00:19:51.471 --rc genhtml_legend=1 00:19:51.471 --rc geninfo_all_blocks=1 00:19:51.471 --rc geninfo_unexecuted_blocks=1 00:19:51.471 00:19:51.471 ' 00:19:51.471 16:39:28 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.471 16:39:28 -- nvmf/common.sh@7 -- # uname -s 00:19:51.471 16:39:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.471 16:39:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.471 16:39:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.471 16:39:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.471 16:39:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.471 16:39:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.471 16:39:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.471 16:39:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.471 16:39:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.471 16:39:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.471 16:39:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:51.471 
16:39:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:51.471 16:39:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.471 16:39:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.471 16:39:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.471 16:39:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.471 16:39:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.471 16:39:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.471 16:39:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.471 16:39:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.471 16:39:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.471 16:39:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.471 16:39:28 -- paths/export.sh@5 -- # export PATH 00:19:51.471 16:39:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.471 16:39:28 -- nvmf/common.sh@46 -- # : 0 00:19:51.472 16:39:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:51.472 16:39:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:51.472 16:39:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:51.472 16:39:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.472 16:39:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.472 16:39:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:19:51.472 16:39:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:51.472 16:39:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:51.472 16:39:28 -- host/aer.sh@11 -- # nvmftestinit 00:19:51.472 16:39:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:51.472 16:39:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.472 16:39:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:51.472 16:39:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:51.472 16:39:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:51.472 16:39:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.472 16:39:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.472 16:39:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.472 16:39:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:51.472 16:39:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:51.472 16:39:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:51.472 16:39:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:51.472 16:39:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:51.472 16:39:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:51.472 16:39:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.472 16:39:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.472 16:39:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:51.472 16:39:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:51.472 16:39:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:51.472 16:39:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:51.472 16:39:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:51.472 16:39:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.472 16:39:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:51.472 16:39:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:51.472 16:39:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:51.472 16:39:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:51.472 16:39:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:51.472 16:39:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:51.472 Cannot find device "nvmf_tgt_br" 00:19:51.472 16:39:28 -- nvmf/common.sh@154 -- # true 00:19:51.472 16:39:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.472 Cannot find device "nvmf_tgt_br2" 00:19:51.472 16:39:28 -- nvmf/common.sh@155 -- # true 00:19:51.472 16:39:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:51.472 16:39:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:51.472 Cannot find device "nvmf_tgt_br" 00:19:51.472 16:39:28 -- nvmf/common.sh@157 -- # true 00:19:51.472 16:39:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:51.472 Cannot find device "nvmf_tgt_br2" 00:19:51.472 16:39:28 -- nvmf/common.sh@158 -- # true 00:19:51.472 16:39:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:51.759 16:39:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:51.759 16:39:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.759 16:39:29 -- nvmf/common.sh@161 -- # true 00:19:51.759 16:39:29 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.759 16:39:29 -- nvmf/common.sh@162 -- # true 00:19:51.759 16:39:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:51.759 16:39:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:51.759 16:39:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:51.759 16:39:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:51.759 16:39:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:51.759 16:39:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:51.759 16:39:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:51.759 16:39:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:51.759 16:39:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:51.759 16:39:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:51.759 16:39:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:51.759 16:39:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:51.759 16:39:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:51.759 16:39:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:51.759 16:39:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:51.759 16:39:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:51.759 16:39:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:51.759 16:39:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:51.759 16:39:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:51.759 16:39:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:51.759 16:39:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:51.759 16:39:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:51.759 16:39:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:51.759 16:39:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:51.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:19:51.759 00:19:51.759 --- 10.0.0.2 ping statistics --- 00:19:51.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.759 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:51.759 16:39:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:51.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:51.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:19:51.759 00:19:51.759 --- 10.0.0.3 ping statistics --- 00:19:51.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.759 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:51.759 16:39:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:51.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:51.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:51.759 00:19:51.759 --- 10.0.0.1 ping statistics --- 00:19:51.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.759 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:51.759 16:39:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.759 16:39:29 -- nvmf/common.sh@421 -- # return 0 00:19:51.759 16:39:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:51.759 16:39:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.759 16:39:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:51.760 16:39:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:51.760 16:39:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.760 16:39:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:51.760 16:39:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:51.760 16:39:29 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:51.760 16:39:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:51.760 16:39:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:51.760 16:39:29 -- common/autotest_common.sh@10 -- # set +x 00:19:51.760 16:39:29 -- nvmf/common.sh@469 -- # nvmfpid=93156 00:19:51.760 16:39:29 -- nvmf/common.sh@470 -- # waitforlisten 93156 00:19:51.760 16:39:29 -- common/autotest_common.sh@829 -- # '[' -z 93156 ']' 00:19:51.760 16:39:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.760 16:39:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:51.760 16:39:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.760 16:39:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.760 16:39:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.760 16:39:29 -- common/autotest_common.sh@10 -- # set +x 00:19:52.017 [2024-11-16 16:39:29.302642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:52.017 [2024-11-16 16:39:29.302763] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.017 [2024-11-16 16:39:29.443372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.275 [2024-11-16 16:39:29.513259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:52.275 [2024-11-16 16:39:29.513436] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.275 [2024-11-16 16:39:29.513452] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.275 [2024-11-16 16:39:29.513461] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:52.275 [2024-11-16 16:39:29.513634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.275 [2024-11-16 16:39:29.514242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.275 [2024-11-16 16:39:29.514426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.275 [2024-11-16 16:39:29.514448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.843 16:39:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.843 16:39:30 -- common/autotest_common.sh@862 -- # return 0 00:19:52.843 16:39:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:52.843 16:39:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:52.843 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.843 16:39:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.843 16:39:30 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.843 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.843 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:52.843 [2024-11-16 16:39:30.282524] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.843 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.843 16:39:30 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:52.843 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.843 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.101 Malloc0 00:19:53.101 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.101 16:39:30 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:53.101 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.101 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.101 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.101 16:39:30 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.101 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.101 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.101 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.101 16:39:30 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.101 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.101 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.101 [2024-11-16 16:39:30.361867] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.101 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.101 16:39:30 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:53.101 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.101 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.101 [2024-11-16 16:39:30.373601] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:53.101 [ 00:19:53.101 { 00:19:53.101 "allow_any_host": true, 00:19:53.101 "hosts": [], 00:19:53.101 "listen_addresses": [], 00:19:53.101 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:53.101 "subtype": "Discovery" 00:19:53.101 }, 00:19:53.101 { 00:19:53.101 "allow_any_host": true, 00:19:53.101 "hosts": 
[], 00:19:53.101 "listen_addresses": [ 00:19:53.101 { 00:19:53.101 "adrfam": "IPv4", 00:19:53.101 "traddr": "10.0.0.2", 00:19:53.101 "transport": "TCP", 00:19:53.101 "trsvcid": "4420", 00:19:53.101 "trtype": "TCP" 00:19:53.101 } 00:19:53.101 ], 00:19:53.101 "max_cntlid": 65519, 00:19:53.101 "max_namespaces": 2, 00:19:53.101 "min_cntlid": 1, 00:19:53.101 "model_number": "SPDK bdev Controller", 00:19:53.101 "namespaces": [ 00:19:53.101 { 00:19:53.101 "bdev_name": "Malloc0", 00:19:53.101 "name": "Malloc0", 00:19:53.101 "nguid": "3BEFC57DDEC84BD69F2A91E5C6385F2D", 00:19:53.101 "nsid": 1, 00:19:53.101 "uuid": "3befc57d-dec8-4bd6-9f2a-91e5c6385f2d" 00:19:53.101 } 00:19:53.101 ], 00:19:53.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.101 "serial_number": "SPDK00000000000001", 00:19:53.101 "subtype": "NVMe" 00:19:53.101 } 00:19:53.101 ] 00:19:53.101 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.101 16:39:30 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:53.101 16:39:30 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:53.101 16:39:30 -- host/aer.sh@33 -- # aerpid=93210 00:19:53.101 16:39:30 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:53.101 16:39:30 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:53.101 16:39:30 -- common/autotest_common.sh@1254 -- # local i=0 00:19:53.101 16:39:30 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:53.101 16:39:30 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:19:53.101 16:39:30 -- common/autotest_common.sh@1257 -- # i=1 00:19:53.101 16:39:30 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:19:53.101 16:39:30 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:53.101 16:39:30 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:19:53.101 16:39:30 -- common/autotest_common.sh@1257 -- # i=2 00:19:53.101 16:39:30 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:19:53.360 16:39:30 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:53.360 16:39:30 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:53.360 16:39:30 -- common/autotest_common.sh@1265 -- # return 0 00:19:53.360 16:39:30 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:53.360 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.360 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.360 Malloc1 00:19:53.360 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.360 16:39:30 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:53.360 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.360 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.360 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.360 16:39:30 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:53.360 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.360 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.360 Asynchronous Event Request test 00:19:53.360 Attaching to 10.0.0.2 00:19:53.360 Attached to 10.0.0.2 00:19:53.360 Registering asynchronous event callbacks... 00:19:53.360 Starting namespace attribute notice tests for all controllers... 
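The rpc_cmd calls traced above assemble the whole AER target: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 capped at two namespaces (-m 2), the first namespace, and a listener on 10.0.0.2:4420. A minimal sketch of the same bring-up as standalone calls against an already-running nvmf_tgt (rpc_cmd in the harness wraps scripts/rpc.py, so they are treated as equivalent here; $SPDK_DIR is an assumed repo path, not from the log):

# Hand-driven equivalent of the aer.sh target setup ($SPDK_DIR assumed)
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0           # 64 MB, 512 B blocks
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2                                       # any host, max 2 namespaces
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

Adding the second namespace (Malloc1, nsid 2) while the aer binary is attached is what fires the namespace-attribute-changed event reported in the next lines.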
00:19:53.360 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:53.360 aer_cb - Changed Namespace 00:19:53.360 Cleaning up... 00:19:53.360 [ 00:19:53.360 { 00:19:53.360 "allow_any_host": true, 00:19:53.360 "hosts": [], 00:19:53.360 "listen_addresses": [], 00:19:53.360 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:53.360 "subtype": "Discovery" 00:19:53.360 }, 00:19:53.360 { 00:19:53.360 "allow_any_host": true, 00:19:53.360 "hosts": [], 00:19:53.360 "listen_addresses": [ 00:19:53.360 { 00:19:53.360 "adrfam": "IPv4", 00:19:53.360 "traddr": "10.0.0.2", 00:19:53.360 "transport": "TCP", 00:19:53.360 "trsvcid": "4420", 00:19:53.360 "trtype": "TCP" 00:19:53.360 } 00:19:53.360 ], 00:19:53.360 "max_cntlid": 65519, 00:19:53.360 "max_namespaces": 2, 00:19:53.360 "min_cntlid": 1, 00:19:53.360 "model_number": "SPDK bdev Controller", 00:19:53.360 "namespaces": [ 00:19:53.360 { 00:19:53.360 "bdev_name": "Malloc0", 00:19:53.360 "name": "Malloc0", 00:19:53.360 "nguid": "3BEFC57DDEC84BD69F2A91E5C6385F2D", 00:19:53.360 "nsid": 1, 00:19:53.360 "uuid": "3befc57d-dec8-4bd6-9f2a-91e5c6385f2d" 00:19:53.360 }, 00:19:53.360 { 00:19:53.360 "bdev_name": "Malloc1", 00:19:53.360 "name": "Malloc1", 00:19:53.360 "nguid": "7A7933F5135B41559EA14B38641D85E4", 00:19:53.360 "nsid": 2, 00:19:53.360 "uuid": "7a7933f5-135b-4155-9ea1-4b38641d85e4" 00:19:53.360 } 00:19:53.360 ], 00:19:53.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.360 "serial_number": "SPDK00000000000001", 00:19:53.360 "subtype": "NVMe" 00:19:53.360 } 00:19:53.360 ] 00:19:53.360 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.360 16:39:30 -- host/aer.sh@43 -- # wait 93210 00:19:53.360 16:39:30 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:53.360 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.360 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.360 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.360 16:39:30 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:53.360 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.361 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.361 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.361 16:39:30 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.361 16:39:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.361 16:39:30 -- common/autotest_common.sh@10 -- # set +x 00:19:53.361 16:39:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.361 16:39:30 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:53.361 16:39:30 -- host/aer.sh@51 -- # nvmftestfini 00:19:53.361 16:39:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:53.361 16:39:30 -- nvmf/common.sh@116 -- # sync 00:19:53.361 16:39:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:53.361 16:39:30 -- nvmf/common.sh@119 -- # set +e 00:19:53.619 16:39:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:53.619 16:39:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:53.619 rmmod nvme_tcp 00:19:53.619 rmmod nvme_fabrics 00:19:53.619 rmmod nvme_keyring 00:19:53.619 16:39:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:53.619 16:39:30 -- nvmf/common.sh@123 -- # set -e 00:19:53.619 16:39:30 -- nvmf/common.sh@124 -- # return 0 00:19:53.619 16:39:30 -- nvmf/common.sh@477 -- # '[' -n 93156 ']' 00:19:53.619 16:39:30 -- nvmf/common.sh@478 -- # killprocess 93156 00:19:53.619 16:39:30 -- 
common/autotest_common.sh@936 -- # '[' -z 93156 ']' 00:19:53.619 16:39:30 -- common/autotest_common.sh@940 -- # kill -0 93156 00:19:53.619 16:39:30 -- common/autotest_common.sh@941 -- # uname 00:19:53.619 16:39:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:53.619 16:39:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93156 00:19:53.619 16:39:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:53.619 16:39:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:53.619 16:39:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93156' 00:19:53.619 killing process with pid 93156 00:19:53.619 16:39:30 -- common/autotest_common.sh@955 -- # kill 93156 00:19:53.619 [2024-11-16 16:39:30.947608] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:53.619 16:39:30 -- common/autotest_common.sh@960 -- # wait 93156 00:19:53.878 16:39:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:53.878 16:39:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:53.878 16:39:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:53.878 16:39:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.878 16:39:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:53.878 16:39:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.878 16:39:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.878 16:39:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.878 16:39:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:53.878 00:19:53.878 real 0m2.555s 00:19:53.878 user 0m6.878s 00:19:53.878 sys 0m0.735s 00:19:53.878 16:39:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:53.878 16:39:31 -- common/autotest_common.sh@10 -- # set +x 00:19:53.878 ************************************ 00:19:53.878 END TEST nvmf_aer 00:19:53.878 ************************************ 00:19:53.878 16:39:31 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:53.878 16:39:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:53.878 16:39:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:53.878 16:39:31 -- common/autotest_common.sh@10 -- # set +x 00:19:53.878 ************************************ 00:19:53.878 START TEST nvmf_async_init 00:19:53.878 ************************************ 00:19:53.878 16:39:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:53.878 * Looking for test storage... 
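Each host test launched through run_test opens with the same prologue seen here: it locates scratch storage (the "Looking for/Found test storage" pair), then probes the installed lcov version to pick coverage flags before sourcing test/nvmf/common.sh. Roughly, under the helper names visible in the trace (the probe below is a simplified sketch, not the verbatim script):

# Shape of the coverage probe in the prologue; lt() wraps cmp_versions "$1" '<' "$2"
if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi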
00:19:54.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:54.137 16:39:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:54.137 16:39:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:54.137 16:39:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:54.137 16:39:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:54.137 16:39:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:54.137 16:39:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:54.137 16:39:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:54.137 16:39:31 -- scripts/common.sh@335 -- # IFS=.-: 00:19:54.137 16:39:31 -- scripts/common.sh@335 -- # read -ra ver1 00:19:54.137 16:39:31 -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.137 16:39:31 -- scripts/common.sh@336 -- # read -ra ver2 00:19:54.137 16:39:31 -- scripts/common.sh@337 -- # local 'op=<' 00:19:54.137 16:39:31 -- scripts/common.sh@339 -- # ver1_l=2 00:19:54.137 16:39:31 -- scripts/common.sh@340 -- # ver2_l=1 00:19:54.137 16:39:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:54.137 16:39:31 -- scripts/common.sh@343 -- # case "$op" in 00:19:54.137 16:39:31 -- scripts/common.sh@344 -- # : 1 00:19:54.137 16:39:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:54.137 16:39:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:54.137 16:39:31 -- scripts/common.sh@364 -- # decimal 1 00:19:54.137 16:39:31 -- scripts/common.sh@352 -- # local d=1 00:19:54.137 16:39:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.137 16:39:31 -- scripts/common.sh@354 -- # echo 1 00:19:54.137 16:39:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:54.137 16:39:31 -- scripts/common.sh@365 -- # decimal 2 00:19:54.137 16:39:31 -- scripts/common.sh@352 -- # local d=2 00:19:54.137 16:39:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.137 16:39:31 -- scripts/common.sh@354 -- # echo 2 00:19:54.137 16:39:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:54.137 16:39:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:54.137 16:39:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:54.137 16:39:31 -- scripts/common.sh@367 -- # return 0 00:19:54.137 16:39:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.137 16:39:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:54.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.137 --rc genhtml_branch_coverage=1 00:19:54.137 --rc genhtml_function_coverage=1 00:19:54.137 --rc genhtml_legend=1 00:19:54.137 --rc geninfo_all_blocks=1 00:19:54.137 --rc geninfo_unexecuted_blocks=1 00:19:54.137 00:19:54.137 ' 00:19:54.138 16:39:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:54.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.138 --rc genhtml_branch_coverage=1 00:19:54.138 --rc genhtml_function_coverage=1 00:19:54.138 --rc genhtml_legend=1 00:19:54.138 --rc geninfo_all_blocks=1 00:19:54.138 --rc geninfo_unexecuted_blocks=1 00:19:54.138 00:19:54.138 ' 00:19:54.138 16:39:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:54.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.138 --rc genhtml_branch_coverage=1 00:19:54.138 --rc genhtml_function_coverage=1 00:19:54.138 --rc genhtml_legend=1 00:19:54.138 --rc geninfo_all_blocks=1 00:19:54.138 --rc geninfo_unexecuted_blocks=1 00:19:54.138 00:19:54.138 ' 00:19:54.138 
16:39:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:54.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.138 --rc genhtml_branch_coverage=1 00:19:54.138 --rc genhtml_function_coverage=1 00:19:54.138 --rc genhtml_legend=1 00:19:54.138 --rc geninfo_all_blocks=1 00:19:54.138 --rc geninfo_unexecuted_blocks=1 00:19:54.138 00:19:54.138 ' 00:19:54.138 16:39:31 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:54.138 16:39:31 -- nvmf/common.sh@7 -- # uname -s 00:19:54.138 16:39:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.138 16:39:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.138 16:39:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.138 16:39:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.138 16:39:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.138 16:39:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.138 16:39:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.138 16:39:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.138 16:39:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.138 16:39:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.138 16:39:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:54.138 16:39:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:54.138 16:39:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.138 16:39:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.138 16:39:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:54.138 16:39:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:54.138 16:39:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.138 16:39:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.138 16:39:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.138 16:39:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.138 16:39:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.138 16:39:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.138 16:39:31 -- paths/export.sh@5 -- # export PATH 00:19:54.138 16:39:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.138 16:39:31 -- nvmf/common.sh@46 -- # : 0 00:19:54.138 16:39:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:54.138 16:39:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:54.138 16:39:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:54.138 16:39:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.138 16:39:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.138 16:39:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:54.138 16:39:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:54.138 16:39:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:54.138 16:39:31 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:54.138 16:39:31 -- host/async_init.sh@14 -- # null_block_size=512 00:19:54.138 16:39:31 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:54.138 16:39:31 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:54.138 16:39:31 -- host/async_init.sh@20 -- # uuidgen 00:19:54.138 16:39:31 -- host/async_init.sh@20 -- # tr -d - 00:19:54.138 16:39:31 -- host/async_init.sh@20 -- # nguid=1dccf54e391340128ae0f9267d22685a 00:19:54.138 16:39:31 -- host/async_init.sh@22 -- # nvmftestinit 00:19:54.138 16:39:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:54.138 16:39:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.138 16:39:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:54.138 16:39:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:54.138 16:39:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:54.138 16:39:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.138 16:39:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.138 16:39:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.138 16:39:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:54.138 16:39:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:54.138 16:39:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:54.138 16:39:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:54.138 16:39:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:54.138 16:39:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:54.138 16:39:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.138 16:39:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.138 16:39:31 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:54.138 16:39:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:54.138 16:39:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:54.138 16:39:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:54.138 16:39:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:54.138 16:39:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.138 16:39:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:54.138 16:39:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:54.138 16:39:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:54.138 16:39:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:54.138 16:39:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:54.138 16:39:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:54.138 Cannot find device "nvmf_tgt_br" 00:19:54.138 16:39:31 -- nvmf/common.sh@154 -- # true 00:19:54.138 16:39:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:54.138 Cannot find device "nvmf_tgt_br2" 00:19:54.138 16:39:31 -- nvmf/common.sh@155 -- # true 00:19:54.138 16:39:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:54.138 16:39:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:54.138 Cannot find device "nvmf_tgt_br" 00:19:54.138 16:39:31 -- nvmf/common.sh@157 -- # true 00:19:54.138 16:39:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:54.138 Cannot find device "nvmf_tgt_br2" 00:19:54.138 16:39:31 -- nvmf/common.sh@158 -- # true 00:19:54.138 16:39:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:54.138 16:39:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:54.138 16:39:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:54.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:54.138 16:39:31 -- nvmf/common.sh@161 -- # true 00:19:54.138 16:39:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:54.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:54.139 16:39:31 -- nvmf/common.sh@162 -- # true 00:19:54.139 16:39:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:54.139 16:39:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:54.397 16:39:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:54.397 16:39:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:54.397 16:39:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:54.397 16:39:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:54.397 16:39:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:54.397 16:39:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:54.397 16:39:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:54.397 16:39:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:54.397 16:39:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:54.397 16:39:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:54.397 16:39:31 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:54.397 16:39:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:54.397 16:39:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:54.397 16:39:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:54.397 16:39:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:54.397 16:39:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:54.397 16:39:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:54.397 16:39:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:54.397 16:39:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:54.397 16:39:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:54.397 16:39:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:54.397 16:39:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:54.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:19:54.397 00:19:54.397 --- 10.0.0.2 ping statistics --- 00:19:54.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.397 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:54.397 16:39:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:54.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:54.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:19:54.397 00:19:54.397 --- 10.0.0.3 ping statistics --- 00:19:54.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.397 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:54.397 16:39:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:54.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:54.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:54.397 00:19:54.397 --- 10.0.0.1 ping statistics --- 00:19:54.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.397 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:54.397 16:39:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.397 16:39:31 -- nvmf/common.sh@421 -- # return 0 00:19:54.397 16:39:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:54.397 16:39:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.397 16:39:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:54.397 16:39:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:54.397 16:39:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.397 16:39:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:54.397 16:39:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:54.397 16:39:31 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:54.397 16:39:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:54.397 16:39:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.397 16:39:31 -- common/autotest_common.sh@10 -- # set +x 00:19:54.397 16:39:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:54.397 16:39:31 -- nvmf/common.sh@469 -- # nvmfpid=93389 00:19:54.397 16:39:31 -- nvmf/common.sh@470 -- # waitforlisten 93389 00:19:54.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
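nvmf_veth_init, traced above, builds the virtual topology the TCP tests run on: a network namespace nvmf_tgt_ns_spdk holding the target-side interfaces, veth pairs whose bridge ends are enslaved to nvmf_br, and an iptables rule admitting port 4420. Condensed from the ip/iptables calls in the trace (the second target interface nvmf_tgt_if2/10.0.0.3 follows the same pattern, and the intermediate "link set ... up" steps are elided):

# Condensed shape of nvmf_veth_init (all names and addresses as in the trace)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1/24
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, 10.0.0.2/24
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings confirm both sides of the bridge before nvmf_tgt is launched inside the namespace, which is why the app command line is prefixed with NVMF_TARGET_NS_CMD, i.e. "ip netns exec nvmf_tgt_ns_spdk".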
00:19:54.397 16:39:31 -- common/autotest_common.sh@829 -- # '[' -z 93389 ']' 00:19:54.397 16:39:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.397 16:39:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.397 16:39:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.397 16:39:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.397 16:39:31 -- common/autotest_common.sh@10 -- # set +x 00:19:54.397 [2024-11-16 16:39:31.879431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:54.397 [2024-11-16 16:39:31.879673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.656 [2024-11-16 16:39:32.019580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.656 [2024-11-16 16:39:32.110129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:54.656 [2024-11-16 16:39:32.110642] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.656 [2024-11-16 16:39:32.110806] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.656 [2024-11-16 16:39:32.110976] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:54.656 [2024-11-16 16:39:32.111091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.592 16:39:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.592 16:39:32 -- common/autotest_common.sh@862 -- # return 0 00:19:55.592 16:39:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:55.592 16:39:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.592 16:39:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.592 16:39:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.592 16:39:32 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:55.592 16:39:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.592 16:39:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.592 [2024-11-16 16:39:32.923333] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.592 16:39:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.592 16:39:32 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:55.592 16:39:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.592 16:39:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.592 null0 00:19:55.592 16:39:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.592 16:39:32 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:55.592 16:39:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.592 16:39:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.592 16:39:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.592 16:39:32 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:55.592 16:39:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.592 16:39:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.592 16:39:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
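Unlike the AER test, async_init backs its namespace with a null bdev: bdev_null_create null0 1024 512 makes a 1024 MB pseudo device with 512-byte blocks that completes all I/O without storing data, which is all the attach/reset plumbing under test needs, and bdev_wait_for_examine holds the script until pending bdev examine callbacks have finished. As standalone calls (bare rpc.py invocation assumed; the sizes come from null_bdev_size/null_block_size set earlier in the script):

# Null-bdev backing for the async_init namespace
rpc.py bdev_null_create null0 1024 512    # name, size in MB, block size; I/O is discarded
rpc.py bdev_wait_for_examine              # block until bdev examination has settled

The namespace added next is tagged with the fixed NGUID generated at the top of the script (uuidgen | tr -d -), giving the host a known uuid to check once nvme0n1 appears.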
00:19:55.592 16:39:32 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1dccf54e391340128ae0f9267d22685a 00:19:55.592 16:39:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.592 16:39:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.592 16:39:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.592 16:39:32 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:55.592 16:39:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.592 16:39:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.592 [2024-11-16 16:39:32.963446] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.592 16:39:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.592 16:39:32 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:55.592 16:39:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.592 16:39:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.850 nvme0n1 00:19:55.850 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.850 16:39:33 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:55.850 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.850 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:55.850 [ 00:19:55.850 { 00:19:55.850 "aliases": [ 00:19:55.850 "1dccf54e-3913-4012-8ae0-f9267d22685a" 00:19:55.850 ], 00:19:55.850 "assigned_rate_limits": { 00:19:55.850 "r_mbytes_per_sec": 0, 00:19:55.850 "rw_ios_per_sec": 0, 00:19:55.850 "rw_mbytes_per_sec": 0, 00:19:55.850 "w_mbytes_per_sec": 0 00:19:55.850 }, 00:19:55.850 "block_size": 512, 00:19:55.850 "claimed": false, 00:19:55.850 "driver_specific": { 00:19:55.850 "mp_policy": "active_passive", 00:19:55.850 "nvme": [ 00:19:55.850 { 00:19:55.850 "ctrlr_data": { 00:19:55.850 "ana_reporting": false, 00:19:55.850 "cntlid": 1, 00:19:55.850 "firmware_revision": "24.01.1", 00:19:55.850 "model_number": "SPDK bdev Controller", 00:19:55.850 "multi_ctrlr": true, 00:19:55.850 "oacs": { 00:19:55.850 "firmware": 0, 00:19:55.850 "format": 0, 00:19:55.850 "ns_manage": 0, 00:19:55.850 "security": 0 00:19:55.850 }, 00:19:55.850 "serial_number": "00000000000000000000", 00:19:55.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.850 "vendor_id": "0x8086" 00:19:55.850 }, 00:19:55.850 "ns_data": { 00:19:55.850 "can_share": true, 00:19:55.850 "id": 1 00:19:55.850 }, 00:19:55.850 "trid": { 00:19:55.850 "adrfam": "IPv4", 00:19:55.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:55.850 "traddr": "10.0.0.2", 00:19:55.850 "trsvcid": "4420", 00:19:55.850 "trtype": "TCP" 00:19:55.850 }, 00:19:55.850 "vs": { 00:19:55.850 "nvme_version": "1.3" 00:19:55.850 } 00:19:55.850 } 00:19:55.850 ] 00:19:55.850 }, 00:19:55.850 "name": "nvme0n1", 00:19:55.850 "num_blocks": 2097152, 00:19:55.850 "product_name": "NVMe disk", 00:19:55.850 "supported_io_types": { 00:19:55.850 "abort": true, 00:19:55.850 "compare": true, 00:19:55.850 "compare_and_write": true, 00:19:55.850 "flush": true, 00:19:55.850 "nvme_admin": true, 00:19:55.850 "nvme_io": true, 00:19:55.851 "read": true, 00:19:55.851 "reset": true, 00:19:55.851 "unmap": false, 00:19:55.851 "write": true, 00:19:55.851 "write_zeroes": true 00:19:55.851 }, 00:19:55.851 "uuid": "1dccf54e-3913-4012-8ae0-f9267d22685a", 00:19:55.851 "zoned": false 00:19:55.851 } 
00:19:55.851 ] 00:19:55.851 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.851 16:39:33 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:55.851 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.851 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:55.851 [2024-11-16 16:39:33.244431] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:55.851 [2024-11-16 16:39:33.244504] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb09a00 (9): Bad file descriptor 00:19:56.109 [2024-11-16 16:39:33.386185] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:56.109 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.109 16:39:33 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:56.109 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.109 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:56.109 [ 00:19:56.109 { 00:19:56.109 "aliases": [ 00:19:56.109 "1dccf54e-3913-4012-8ae0-f9267d22685a" 00:19:56.109 ], 00:19:56.109 "assigned_rate_limits": { 00:19:56.109 "r_mbytes_per_sec": 0, 00:19:56.109 "rw_ios_per_sec": 0, 00:19:56.109 "rw_mbytes_per_sec": 0, 00:19:56.109 "w_mbytes_per_sec": 0 00:19:56.109 }, 00:19:56.109 "block_size": 512, 00:19:56.109 "claimed": false, 00:19:56.109 "driver_specific": { 00:19:56.109 "mp_policy": "active_passive", 00:19:56.109 "nvme": [ 00:19:56.109 { 00:19:56.109 "ctrlr_data": { 00:19:56.109 "ana_reporting": false, 00:19:56.109 "cntlid": 2, 00:19:56.109 "firmware_revision": "24.01.1", 00:19:56.109 "model_number": "SPDK bdev Controller", 00:19:56.109 "multi_ctrlr": true, 00:19:56.109 "oacs": { 00:19:56.109 "firmware": 0, 00:19:56.109 "format": 0, 00:19:56.109 "ns_manage": 0, 00:19:56.109 "security": 0 00:19:56.109 }, 00:19:56.109 "serial_number": "00000000000000000000", 00:19:56.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.109 "vendor_id": "0x8086" 00:19:56.109 }, 00:19:56.109 "ns_data": { 00:19:56.109 "can_share": true, 00:19:56.109 "id": 1 00:19:56.109 }, 00:19:56.109 "trid": { 00:19:56.109 "adrfam": "IPv4", 00:19:56.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.109 "traddr": "10.0.0.2", 00:19:56.109 "trsvcid": "4420", 00:19:56.109 "trtype": "TCP" 00:19:56.109 }, 00:19:56.109 "vs": { 00:19:56.110 "nvme_version": "1.3" 00:19:56.110 } 00:19:56.110 } 00:19:56.110 ] 00:19:56.110 }, 00:19:56.110 "name": "nvme0n1", 00:19:56.110 "num_blocks": 2097152, 00:19:56.110 "product_name": "NVMe disk", 00:19:56.110 "supported_io_types": { 00:19:56.110 "abort": true, 00:19:56.110 "compare": true, 00:19:56.110 "compare_and_write": true, 00:19:56.110 "flush": true, 00:19:56.110 "nvme_admin": true, 00:19:56.110 "nvme_io": true, 00:19:56.110 "read": true, 00:19:56.110 "reset": true, 00:19:56.110 "unmap": false, 00:19:56.110 "write": true, 00:19:56.110 "write_zeroes": true 00:19:56.110 }, 00:19:56.110 "uuid": "1dccf54e-3913-4012-8ae0-f9267d22685a", 00:19:56.110 "zoned": false 00:19:56.110 } 00:19:56.110 ] 00:19:56.110 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.110 16:39:33 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.110 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.110 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.110 16:39:33 -- 
host/async_init.sh@53 -- # mktemp 00:19:56.110 16:39:33 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.TWNdnhW2sm 00:19:56.110 16:39:33 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:56.110 16:39:33 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.TWNdnhW2sm 00:19:56.110 16:39:33 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:56.110 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.110 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.110 16:39:33 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:56.110 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.110 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 [2024-11-16 16:39:33.461403] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.110 [2024-11-16 16:39:33.461534] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:56.110 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.110 16:39:33 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TWNdnhW2sm 00:19:56.110 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.110 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.110 16:39:33 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TWNdnhW2sm 00:19:56.110 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.110 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 [2024-11-16 16:39:33.477401] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.110 nvme0n1 00:19:56.110 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.110 16:39:33 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:56.110 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.110 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 [ 00:19:56.110 { 00:19:56.110 "aliases": [ 00:19:56.110 "1dccf54e-3913-4012-8ae0-f9267d22685a" 00:19:56.110 ], 00:19:56.110 "assigned_rate_limits": { 00:19:56.110 "r_mbytes_per_sec": 0, 00:19:56.110 "rw_ios_per_sec": 0, 00:19:56.110 "rw_mbytes_per_sec": 0, 00:19:56.110 "w_mbytes_per_sec": 0 00:19:56.110 }, 00:19:56.110 "block_size": 512, 00:19:56.110 "claimed": false, 00:19:56.110 "driver_specific": { 00:19:56.110 "mp_policy": "active_passive", 00:19:56.110 "nvme": [ 00:19:56.110 { 00:19:56.110 "ctrlr_data": { 00:19:56.110 "ana_reporting": false, 00:19:56.110 "cntlid": 3, 00:19:56.110 "firmware_revision": "24.01.1", 00:19:56.110 "model_number": "SPDK bdev Controller", 00:19:56.110 "multi_ctrlr": true, 00:19:56.110 "oacs": { 00:19:56.110 "firmware": 0, 00:19:56.110 "format": 0, 00:19:56.110 "ns_manage": 0, 00:19:56.110 "security": 0 00:19:56.110 }, 00:19:56.110 "serial_number": "00000000000000000000", 00:19:56.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.110 "vendor_id": "0x8086" 00:19:56.110 }, 00:19:56.110 "ns_data": { 
00:19:56.110 "can_share": true, 00:19:56.110 "id": 1 00:19:56.110 }, 00:19:56.110 "trid": { 00:19:56.110 "adrfam": "IPv4", 00:19:56.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.110 "traddr": "10.0.0.2", 00:19:56.110 "trsvcid": "4421", 00:19:56.110 "trtype": "TCP" 00:19:56.110 }, 00:19:56.110 "vs": { 00:19:56.110 "nvme_version": "1.3" 00:19:56.110 } 00:19:56.110 } 00:19:56.110 ] 00:19:56.110 }, 00:19:56.110 "name": "nvme0n1", 00:19:56.110 "num_blocks": 2097152, 00:19:56.110 "product_name": "NVMe disk", 00:19:56.110 "supported_io_types": { 00:19:56.110 "abort": true, 00:19:56.110 "compare": true, 00:19:56.110 "compare_and_write": true, 00:19:56.110 "flush": true, 00:19:56.110 "nvme_admin": true, 00:19:56.110 "nvme_io": true, 00:19:56.110 "read": true, 00:19:56.110 "reset": true, 00:19:56.110 "unmap": false, 00:19:56.110 "write": true, 00:19:56.110 "write_zeroes": true 00:19:56.110 }, 00:19:56.110 "uuid": "1dccf54e-3913-4012-8ae0-f9267d22685a", 00:19:56.110 "zoned": false 00:19:56.110 } 00:19:56.110 ] 00:19:56.110 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.110 16:39:33 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.110 16:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.110 16:39:33 -- common/autotest_common.sh@10 -- # set +x 00:19:56.110 16:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.110 16:39:33 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.TWNdnhW2sm 00:19:56.110 16:39:33 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:56.110 16:39:33 -- host/async_init.sh@78 -- # nvmftestfini 00:19:56.110 16:39:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:56.110 16:39:33 -- nvmf/common.sh@116 -- # sync 00:19:56.369 16:39:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:56.369 16:39:33 -- nvmf/common.sh@119 -- # set +e 00:19:56.369 16:39:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:56.369 16:39:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:56.369 rmmod nvme_tcp 00:19:56.369 rmmod nvme_fabrics 00:19:56.369 rmmod nvme_keyring 00:19:56.369 16:39:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:56.369 16:39:33 -- nvmf/common.sh@123 -- # set -e 00:19:56.369 16:39:33 -- nvmf/common.sh@124 -- # return 0 00:19:56.369 16:39:33 -- nvmf/common.sh@477 -- # '[' -n 93389 ']' 00:19:56.369 16:39:33 -- nvmf/common.sh@478 -- # killprocess 93389 00:19:56.369 16:39:33 -- common/autotest_common.sh@936 -- # '[' -z 93389 ']' 00:19:56.369 16:39:33 -- common/autotest_common.sh@940 -- # kill -0 93389 00:19:56.369 16:39:33 -- common/autotest_common.sh@941 -- # uname 00:19:56.369 16:39:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:56.369 16:39:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93389 00:19:56.369 16:39:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:56.369 killing process with pid 93389 00:19:56.369 16:39:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:56.369 16:39:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93389' 00:19:56.369 16:39:33 -- common/autotest_common.sh@955 -- # kill 93389 00:19:56.369 16:39:33 -- common/autotest_common.sh@960 -- # wait 93389 00:19:56.627 16:39:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:56.627 16:39:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:56.627 16:39:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:56.627 16:39:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:19:56.627 16:39:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:56.627 16:39:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.627 16:39:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.627 16:39:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.627 16:39:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:56.627 00:19:56.627 real 0m2.719s 00:19:56.627 user 0m2.529s 00:19:56.627 sys 0m0.687s 00:19:56.627 16:39:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:56.627 16:39:34 -- common/autotest_common.sh@10 -- # set +x 00:19:56.627 ************************************ 00:19:56.627 END TEST nvmf_async_init 00:19:56.627 ************************************ 00:19:56.627 16:39:34 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:56.627 16:39:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.627 16:39:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.627 16:39:34 -- common/autotest_common.sh@10 -- # set +x 00:19:56.627 ************************************ 00:19:56.627 START TEST dma 00:19:56.627 ************************************ 00:19:56.627 16:39:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:56.886 * Looking for test storage... 00:19:56.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:56.886 16:39:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:56.886 16:39:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:56.886 16:39:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:56.886 16:39:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:56.886 16:39:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:56.886 16:39:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:56.886 16:39:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:56.886 16:39:34 -- scripts/common.sh@335 -- # IFS=.-: 00:19:56.886 16:39:34 -- scripts/common.sh@335 -- # read -ra ver1 00:19:56.886 16:39:34 -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.886 16:39:34 -- scripts/common.sh@336 -- # read -ra ver2 00:19:56.886 16:39:34 -- scripts/common.sh@337 -- # local 'op=<' 00:19:56.886 16:39:34 -- scripts/common.sh@339 -- # ver1_l=2 00:19:56.886 16:39:34 -- scripts/common.sh@340 -- # ver2_l=1 00:19:56.886 16:39:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:56.886 16:39:34 -- scripts/common.sh@343 -- # case "$op" in 00:19:56.886 16:39:34 -- scripts/common.sh@344 -- # : 1 00:19:56.886 16:39:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:56.886 16:39:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:56.886 16:39:34 -- scripts/common.sh@364 -- # decimal 1 00:19:56.886 16:39:34 -- scripts/common.sh@352 -- # local d=1 00:19:56.886 16:39:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.886 16:39:34 -- scripts/common.sh@354 -- # echo 1 00:19:56.886 16:39:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:56.886 16:39:34 -- scripts/common.sh@365 -- # decimal 2 00:19:56.886 16:39:34 -- scripts/common.sh@352 -- # local d=2 00:19:56.886 16:39:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.886 16:39:34 -- scripts/common.sh@354 -- # echo 2 00:19:56.886 16:39:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:56.886 16:39:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:56.886 16:39:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:56.886 16:39:34 -- scripts/common.sh@367 -- # return 0 00:19:56.886 16:39:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.886 16:39:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:56.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.886 --rc genhtml_branch_coverage=1 00:19:56.886 --rc genhtml_function_coverage=1 00:19:56.886 --rc genhtml_legend=1 00:19:56.886 --rc geninfo_all_blocks=1 00:19:56.886 --rc geninfo_unexecuted_blocks=1 00:19:56.886 00:19:56.886 ' 00:19:56.886 16:39:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:56.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.886 --rc genhtml_branch_coverage=1 00:19:56.886 --rc genhtml_function_coverage=1 00:19:56.886 --rc genhtml_legend=1 00:19:56.886 --rc geninfo_all_blocks=1 00:19:56.886 --rc geninfo_unexecuted_blocks=1 00:19:56.886 00:19:56.886 ' 00:19:56.886 16:39:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:56.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.886 --rc genhtml_branch_coverage=1 00:19:56.886 --rc genhtml_function_coverage=1 00:19:56.886 --rc genhtml_legend=1 00:19:56.886 --rc geninfo_all_blocks=1 00:19:56.886 --rc geninfo_unexecuted_blocks=1 00:19:56.886 00:19:56.886 ' 00:19:56.886 16:39:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:56.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.886 --rc genhtml_branch_coverage=1 00:19:56.886 --rc genhtml_function_coverage=1 00:19:56.886 --rc genhtml_legend=1 00:19:56.886 --rc geninfo_all_blocks=1 00:19:56.886 --rc geninfo_unexecuted_blocks=1 00:19:56.886 00:19:56.886 ' 00:19:56.886 16:39:34 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:56.886 16:39:34 -- nvmf/common.sh@7 -- # uname -s 00:19:56.886 16:39:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.886 16:39:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.886 16:39:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.886 16:39:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.886 16:39:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.886 16:39:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.886 16:39:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.886 16:39:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.886 16:39:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.886 16:39:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.886 16:39:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:56.886 
16:39:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:56.886 16:39:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.886 16:39:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.886 16:39:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:56.886 16:39:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:56.886 16:39:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.886 16:39:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.886 16:39:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.886 16:39:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.886 16:39:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.886 16:39:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.886 16:39:34 -- paths/export.sh@5 -- # export PATH 00:19:56.886 16:39:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.886 16:39:34 -- nvmf/common.sh@46 -- # : 0 00:19:56.886 16:39:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:56.886 16:39:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:56.886 16:39:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:56.886 16:39:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.886 16:39:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.886 16:39:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
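The dma test is effectively a no-op on this transport: host/dma.sh probes the transport right after sourcing common.sh and exits before any target setup, since the paths it exercises are only available over RDMA. The guard, as echoed in the trace that follows:

# Effective body of host/dma.sh under --transport=tcp (mirroring dma.sh@12-13 below)
if [ "tcp" != "rdma" ]; then
        exit 0        # nothing to test without an RDMA-capable transport
fi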
00:19:56.886 16:39:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:56.886 16:39:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:56.886 16:39:34 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:56.886 16:39:34 -- host/dma.sh@13 -- # exit 0 00:19:56.886 00:19:56.886 real 0m0.213s 00:19:56.886 user 0m0.135s 00:19:56.886 sys 0m0.087s 00:19:56.886 16:39:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:56.886 16:39:34 -- common/autotest_common.sh@10 -- # set +x 00:19:56.886 ************************************ 00:19:56.886 END TEST dma 00:19:56.886 ************************************ 00:19:56.886 16:39:34 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:56.887 16:39:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:56.887 16:39:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.887 16:39:34 -- common/autotest_common.sh@10 -- # set +x 00:19:56.887 ************************************ 00:19:56.887 START TEST nvmf_identify 00:19:56.887 ************************************ 00:19:56.887 16:39:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:57.146 * Looking for test storage... 00:19:57.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:57.146 16:39:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:57.146 16:39:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:57.146 16:39:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:57.146 16:39:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:57.146 16:39:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:57.146 16:39:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:57.146 16:39:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:57.146 16:39:34 -- scripts/common.sh@335 -- # IFS=.-: 00:19:57.146 16:39:34 -- scripts/common.sh@335 -- # read -ra ver1 00:19:57.146 16:39:34 -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.146 16:39:34 -- scripts/common.sh@336 -- # read -ra ver2 00:19:57.146 16:39:34 -- scripts/common.sh@337 -- # local 'op=<' 00:19:57.146 16:39:34 -- scripts/common.sh@339 -- # ver1_l=2 00:19:57.146 16:39:34 -- scripts/common.sh@340 -- # ver2_l=1 00:19:57.146 16:39:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:57.146 16:39:34 -- scripts/common.sh@343 -- # case "$op" in 00:19:57.146 16:39:34 -- scripts/common.sh@344 -- # : 1 00:19:57.146 16:39:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:57.146 16:39:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.146 16:39:34 -- scripts/common.sh@364 -- # decimal 1 00:19:57.146 16:39:34 -- scripts/common.sh@352 -- # local d=1 00:19:57.146 16:39:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.146 16:39:34 -- scripts/common.sh@354 -- # echo 1 00:19:57.146 16:39:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:57.146 16:39:34 -- scripts/common.sh@365 -- # decimal 2 00:19:57.146 16:39:34 -- scripts/common.sh@352 -- # local d=2 00:19:57.146 16:39:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.146 16:39:34 -- scripts/common.sh@354 -- # echo 2 00:19:57.146 16:39:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:57.146 16:39:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:57.146 16:39:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:57.146 16:39:34 -- scripts/common.sh@367 -- # return 0 00:19:57.146 16:39:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.146 16:39:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:57.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.146 --rc genhtml_branch_coverage=1 00:19:57.146 --rc genhtml_function_coverage=1 00:19:57.146 --rc genhtml_legend=1 00:19:57.146 --rc geninfo_all_blocks=1 00:19:57.146 --rc geninfo_unexecuted_blocks=1 00:19:57.146 00:19:57.146 ' 00:19:57.146 16:39:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:57.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.146 --rc genhtml_branch_coverage=1 00:19:57.146 --rc genhtml_function_coverage=1 00:19:57.146 --rc genhtml_legend=1 00:19:57.146 --rc geninfo_all_blocks=1 00:19:57.146 --rc geninfo_unexecuted_blocks=1 00:19:57.146 00:19:57.146 ' 00:19:57.146 16:39:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:57.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.146 --rc genhtml_branch_coverage=1 00:19:57.146 --rc genhtml_function_coverage=1 00:19:57.146 --rc genhtml_legend=1 00:19:57.146 --rc geninfo_all_blocks=1 00:19:57.146 --rc geninfo_unexecuted_blocks=1 00:19:57.146 00:19:57.146 ' 00:19:57.146 16:39:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:57.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.146 --rc genhtml_branch_coverage=1 00:19:57.146 --rc genhtml_function_coverage=1 00:19:57.146 --rc genhtml_legend=1 00:19:57.146 --rc geninfo_all_blocks=1 00:19:57.146 --rc geninfo_unexecuted_blocks=1 00:19:57.146 00:19:57.146 ' 00:19:57.146 16:39:34 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.146 16:39:34 -- nvmf/common.sh@7 -- # uname -s 00:19:57.146 16:39:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.146 16:39:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.146 16:39:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.146 16:39:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.146 16:39:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.146 16:39:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.146 16:39:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.146 16:39:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.146 16:39:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.146 16:39:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.146 16:39:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:57.146 
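(The gen-hostnqn trace above shows where the host identity comes from: nvme gen-hostnqn mints a UUID-based NQN, and the NVME_HOSTID traced next is simply that NQN's UUID suffix. A minimal re-derivation, assuming a parameter-expansion extraction rather than whatever common.sh actually does internally:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: keep everything after the last ':', i.e. the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")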
16:39:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:57.146 16:39:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.146 16:39:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.146 16:39:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.146 16:39:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.146 16:39:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.146 16:39:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.146 16:39:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.146 16:39:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.146 16:39:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.146 16:39:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.146 16:39:34 -- paths/export.sh@5 -- # export PATH 00:19:57.146 16:39:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.146 16:39:34 -- nvmf/common.sh@46 -- # : 0 00:19:57.146 16:39:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:57.146 16:39:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:57.146 16:39:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:57.146 16:39:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.147 16:39:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.147 16:39:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:19:57.147 16:39:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:57.147 16:39:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:57.147 16:39:34 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:57.147 16:39:34 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:57.147 16:39:34 -- host/identify.sh@14 -- # nvmftestinit 00:19:57.147 16:39:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:57.147 16:39:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.147 16:39:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:57.147 16:39:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:57.147 16:39:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:57.147 16:39:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.147 16:39:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.147 16:39:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.147 16:39:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:57.147 16:39:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:57.147 16:39:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:57.147 16:39:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:57.147 16:39:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:57.147 16:39:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:57.147 16:39:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.147 16:39:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.147 16:39:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:57.147 16:39:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:57.147 16:39:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.147 16:39:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.147 16:39:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.147 16:39:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.147 16:39:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.147 16:39:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.147 16:39:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.147 16:39:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.147 16:39:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:57.147 16:39:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:57.147 Cannot find device "nvmf_tgt_br" 00:19:57.147 16:39:34 -- nvmf/common.sh@154 -- # true 00:19:57.147 16:39:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.147 Cannot find device "nvmf_tgt_br2" 00:19:57.147 16:39:34 -- nvmf/common.sh@155 -- # true 00:19:57.147 16:39:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:57.147 16:39:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:57.147 Cannot find device "nvmf_tgt_br" 00:19:57.147 16:39:34 -- nvmf/common.sh@157 -- # true 00:19:57.147 16:39:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:57.147 Cannot find device "nvmf_tgt_br2" 00:19:57.147 16:39:34 -- nvmf/common.sh@158 -- # true 00:19:57.147 16:39:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:57.147 16:39:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:57.405 16:39:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:57.405 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:19:57.405 16:39:34 -- nvmf/common.sh@161 -- # true 00:19:57.405 16:39:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:57.405 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:57.405 16:39:34 -- nvmf/common.sh@162 -- # true 00:19:57.405 16:39:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:57.405 16:39:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:57.405 16:39:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:57.405 16:39:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:57.405 16:39:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:57.405 16:39:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:57.405 16:39:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:57.405 16:39:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:57.405 16:39:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:57.405 16:39:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:57.405 16:39:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:57.405 16:39:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:57.405 16:39:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:57.405 16:39:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:57.405 16:39:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:57.405 16:39:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:57.405 16:39:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:57.405 16:39:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:57.405 16:39:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:57.405 16:39:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:57.405 16:39:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:57.405 16:39:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:57.405 16:39:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:57.405 16:39:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:57.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:19:57.405 00:19:57.405 --- 10.0.0.2 ping statistics --- 00:19:57.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.405 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:57.405 16:39:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:57.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:57.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:57.405 00:19:57.406 --- 10.0.0.3 ping statistics --- 00:19:57.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.406 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:57.406 16:39:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:57.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:57.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:57.406 00:19:57.406 --- 10.0.0.1 ping statistics --- 00:19:57.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.406 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:57.406 16:39:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.406 16:39:34 -- nvmf/common.sh@421 -- # return 0 00:19:57.406 16:39:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:57.406 16:39:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.406 16:39:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:57.406 16:39:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:57.406 16:39:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.406 16:39:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:57.406 16:39:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:57.406 16:39:34 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:57.406 16:39:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.406 16:39:34 -- common/autotest_common.sh@10 -- # set +x 00:19:57.665 16:39:34 -- host/identify.sh@19 -- # nvmfpid=93676 00:19:57.665 16:39:34 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:57.665 16:39:34 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:57.665 16:39:34 -- host/identify.sh@23 -- # waitforlisten 93676 00:19:57.665 16:39:34 -- common/autotest_common.sh@829 -- # '[' -z 93676 ']' 00:19:57.665 16:39:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.665 16:39:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.665 16:39:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.665 16:39:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.665 16:39:34 -- common/autotest_common.sh@10 -- # set +x 00:19:57.665 [2024-11-16 16:39:34.954573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:57.665 [2024-11-16 16:39:34.954665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.665 [2024-11-16 16:39:35.092730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.924 [2024-11-16 16:39:35.168167] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:57.924 [2024-11-16 16:39:35.168319] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.924 [2024-11-16 16:39:35.168331] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.924 [2024-11-16 16:39:35.168339] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
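(The three pings above confirm the topology that nvmf_veth_init just built: nvmf_init_if at 10.0.0.1/24 stays in the root namespace; nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24 are moved into the nvmf_tgt_ns_spdk namespace; the bridge-side veth peers are enslaved to nvmf_br; and iptables admits TCP port 4420 plus intra-bridge forwarding. Condensed from the ip/iptables calls in the trace, with the second target interface and the individual link-up steps omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # root namespace -> target namespace, as verified above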
00:19:57.924 [2024-11-16 16:39:35.168689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.924 [2024-11-16 16:39:35.169006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.924 [2024-11-16 16:39:35.169171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.924 [2024-11-16 16:39:35.169175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.860 16:39:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.860 16:39:35 -- common/autotest_common.sh@862 -- # return 0 00:19:58.860 16:39:35 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.860 16:39:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.860 16:39:35 -- common/autotest_common.sh@10 -- # set +x 00:19:58.860 [2024-11-16 16:39:35.991871] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.860 16:39:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.860 16:39:36 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:58.860 16:39:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.860 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:19:58.860 16:39:36 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.860 16:39:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.860 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:19:58.860 Malloc0 00:19:58.860 16:39:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.860 16:39:36 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:58.860 16:39:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.861 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:19:58.861 16:39:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.861 16:39:36 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:58.861 16:39:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.861 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:19:58.861 16:39:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.861 16:39:36 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:58.861 16:39:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.861 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:19:58.861 [2024-11-16 16:39:36.114311] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.861 16:39:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.861 16:39:36 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:58.861 16:39:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.861 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:19:58.861 16:39:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.861 16:39:36 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:58.861 16:39:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.861 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:19:58.861 [2024-11-16 16:39:36.130048] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:58.861 [ 
00:19:58.861 { 00:19:58.861 "allow_any_host": true, 00:19:58.861 "hosts": [], 00:19:58.861 "listen_addresses": [ 00:19:58.861 { 00:19:58.861 "adrfam": "IPv4", 00:19:58.861 "traddr": "10.0.0.2", 00:19:58.861 "transport": "TCP", 00:19:58.861 "trsvcid": "4420", 00:19:58.861 "trtype": "TCP" 00:19:58.861 } 00:19:58.861 ], 00:19:58.861 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:58.861 "subtype": "Discovery" 00:19:58.861 }, 00:19:58.861 { 00:19:58.861 "allow_any_host": true, 00:19:58.861 "hosts": [], 00:19:58.861 "listen_addresses": [ 00:19:58.861 { 00:19:58.861 "adrfam": "IPv4", 00:19:58.861 "traddr": "10.0.0.2", 00:19:58.861 "transport": "TCP", 00:19:58.861 "trsvcid": "4420", 00:19:58.861 "trtype": "TCP" 00:19:58.861 } 00:19:58.861 ], 00:19:58.861 "max_cntlid": 65519, 00:19:58.861 "max_namespaces": 32, 00:19:58.861 "min_cntlid": 1, 00:19:58.861 "model_number": "SPDK bdev Controller", 00:19:58.861 "namespaces": [ 00:19:58.861 { 00:19:58.861 "bdev_name": "Malloc0", 00:19:58.861 "eui64": "ABCDEF0123456789", 00:19:58.861 "name": "Malloc0", 00:19:58.861 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:58.861 "nsid": 1, 00:19:58.861 "uuid": "7b0210c9-6c3c-48a8-b51a-fc5a0dc17a29" 00:19:58.861 } 00:19:58.861 ], 00:19:58.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.861 "serial_number": "SPDK00000000000001", 00:19:58.861 "subtype": "NVMe" 00:19:58.861 } 00:19:58.861 ] 00:19:58.861 16:39:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.861 16:39:36 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:58.861 [2024-11-16 16:39:36.165805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
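(The JSON block above is nvmf_get_subsystems echoing back the configuration that the preceding rpc_cmd calls created, and the spdk_nvme_identify run starting here is about to interrogate it through the discovery NQN. Replayed as direct scripts/rpc.py invocations — a sketch, assuming rpc_cmd is a thin wrapper over rpc.py as in SPDK's autotest helpers, with the arguments copied verbatim from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_get_subsystems   # prints the JSON shown above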
00:19:58.861 [2024-11-16 16:39:36.165865] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93735 ] 00:19:58.861 [2024-11-16 16:39:36.304271] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:58.861 [2024-11-16 16:39:36.304352] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:58.861 [2024-11-16 16:39:36.304359] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:58.861 [2024-11-16 16:39:36.304368] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:58.861 [2024-11-16 16:39:36.304378] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:58.861 [2024-11-16 16:39:36.304524] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:58.861 [2024-11-16 16:39:36.304617] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xef2510 0 00:19:58.861 [2024-11-16 16:39:36.310073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:58.861 [2024-11-16 16:39:36.310114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:58.861 [2024-11-16 16:39:36.310120] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:58.861 [2024-11-16 16:39:36.310124] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:58.861 [2024-11-16 16:39:36.310172] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.310180] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.310184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.861 [2024-11-16 16:39:36.310199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:58.861 [2024-11-16 16:39:36.310231] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.861 [2024-11-16 16:39:36.318106] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.861 [2024-11-16 16:39:36.318126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.861 [2024-11-16 16:39:36.318147] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318152] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3e8a0) on tqpair=0xef2510 00:19:58.861 [2024-11-16 16:39:36.318163] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:58.861 [2024-11-16 16:39:36.318169] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:58.861 [2024-11-16 16:39:36.318175] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:58.861 [2024-11-16 16:39:36.318191] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318196] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318199] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.861 [2024-11-16 16:39:36.318208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.861 [2024-11-16 16:39:36.318236] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.861 [2024-11-16 16:39:36.318311] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.861 [2024-11-16 16:39:36.318318] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.861 [2024-11-16 16:39:36.318321] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318325] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3e8a0) on tqpair=0xef2510 00:19:58.861 [2024-11-16 16:39:36.318330] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:58.861 [2024-11-16 16:39:36.318337] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:58.861 [2024-11-16 16:39:36.318344] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318348] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318351] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.861 [2024-11-16 16:39:36.318358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.861 [2024-11-16 16:39:36.318394] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.861 [2024-11-16 16:39:36.318462] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.861 [2024-11-16 16:39:36.318469] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.861 [2024-11-16 16:39:36.318472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318476] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3e8a0) on tqpair=0xef2510 00:19:58.861 [2024-11-16 16:39:36.318482] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:58.861 [2024-11-16 16:39:36.318490] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:58.861 [2024-11-16 16:39:36.318497] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318504] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.861 [2024-11-16 16:39:36.318511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.861 [2024-11-16 16:39:36.318530] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.861 [2024-11-16 16:39:36.318585] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.861 [2024-11-16 16:39:36.318592] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:19:58.861 [2024-11-16 16:39:36.318595] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318599] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3e8a0) on tqpair=0xef2510 00:19:58.861 [2024-11-16 16:39:36.318605] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:58.861 [2024-11-16 16:39:36.318614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318618] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.861 [2024-11-16 16:39:36.318622] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.861 [2024-11-16 16:39:36.318629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.861 [2024-11-16 16:39:36.318647] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.861 [2024-11-16 16:39:36.318705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.861 [2024-11-16 16:39:36.318712] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.861 [2024-11-16 16:39:36.318715] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.318719] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3e8a0) on tqpair=0xef2510 00:19:58.862 [2024-11-16 16:39:36.318724] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:58.862 [2024-11-16 16:39:36.318729] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:58.862 [2024-11-16 16:39:36.318736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:58.862 [2024-11-16 16:39:36.318842] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:58.862 [2024-11-16 16:39:36.318847] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:58.862 [2024-11-16 16:39:36.318856] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.318860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.318864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.862 [2024-11-16 16:39:36.318871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.862 [2024-11-16 16:39:36.318890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.862 [2024-11-16 16:39:36.318949] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.862 [2024-11-16 16:39:36.318956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.862 [2024-11-16 16:39:36.318959] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.318963] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3e8a0) on tqpair=0xef2510 00:19:58.862 [2024-11-16 16:39:36.318968] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:58.862 [2024-11-16 16:39:36.318978] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.318982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.318985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.862 [2024-11-16 16:39:36.318992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.862 [2024-11-16 16:39:36.319010] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.862 [2024-11-16 16:39:36.319076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.862 [2024-11-16 16:39:36.319082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.862 [2024-11-16 16:39:36.319099] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319104] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3e8a0) on tqpair=0xef2510 00:19:58.862 [2024-11-16 16:39:36.319109] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:58.862 [2024-11-16 16:39:36.319114] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:58.862 [2024-11-16 16:39:36.319123] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:58.862 [2024-11-16 16:39:36.319138] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:58.862 [2024-11-16 16:39:36.319148] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319152] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319155] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.862 [2024-11-16 16:39:36.319162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.862 [2024-11-16 16:39:36.319184] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.862 [2024-11-16 16:39:36.319282] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.862 [2024-11-16 16:39:36.319289] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.862 [2024-11-16 16:39:36.319293] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319297] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef2510): datao=0, datal=4096, cccid=0 00:19:58.862 [2024-11-16 16:39:36.319301] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf3e8a0) on tqpair(0xef2510): expected_datao=0, payload_size=4096 00:19:58.862 [2024-11-16 16:39:36.319310] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319315] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319324] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.862 [2024-11-16 16:39:36.319329] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.862 [2024-11-16 16:39:36.319332] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319336] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3e8a0) on tqpair=0xef2510 00:19:58.862 [2024-11-16 16:39:36.319344] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:58.862 [2024-11-16 16:39:36.319350] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:58.862 [2024-11-16 16:39:36.319354] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:58.862 [2024-11-16 16:39:36.319360] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:58.862 [2024-11-16 16:39:36.319365] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:58.862 [2024-11-16 16:39:36.319370] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:58.862 [2024-11-16 16:39:36.319383] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:58.862 [2024-11-16 16:39:36.319391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.862 [2024-11-16 16:39:36.319406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.862 [2024-11-16 16:39:36.319427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.862 [2024-11-16 16:39:36.319486] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.862 [2024-11-16 16:39:36.319492] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.862 [2024-11-16 16:39:36.319495] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319499] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3e8a0) on tqpair=0xef2510 00:19:58.862 [2024-11-16 16:39:36.319507] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319511] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319515] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xef2510) 00:19:58.862 [2024-11-16 16:39:36.319521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.862 [2024-11-16 16:39:36.319527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319531] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319534] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xef2510) 00:19:58.862 [2024-11-16 16:39:36.319540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.862 [2024-11-16 16:39:36.319546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319549] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319553] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xef2510) 00:19:58.862 [2024-11-16 16:39:36.319558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.862 [2024-11-16 16:39:36.319565] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319569] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319572] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:58.862 [2024-11-16 16:39:36.319578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.862 [2024-11-16 16:39:36.319583] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:58.862 [2024-11-16 16:39:36.319596] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:58.862 [2024-11-16 16:39:36.319603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef2510) 00:19:58.862 [2024-11-16 16:39:36.319617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.862 [2024-11-16 16:39:36.319642] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3e8a0, cid 0, qid 0 00:19:58.862 [2024-11-16 16:39:36.319649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ea00, cid 1, qid 0 00:19:58.862 [2024-11-16 16:39:36.319654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3eb60, cid 2, qid 0 00:19:58.862 [2024-11-16 16:39:36.319658] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:58.862 [2024-11-16 16:39:36.319663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ee20, cid 4, qid 0 00:19:58.862 [2024-11-16 16:39:36.319753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.862 [2024-11-16 16:39:36.319759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.862 [2024-11-16 16:39:36.319763] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319766] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ee20) on tqpair=0xef2510 00:19:58.862 
[2024-11-16 16:39:36.319772] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:58.862 [2024-11-16 16:39:36.319777] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:58.862 [2024-11-16 16:39:36.319787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.862 [2024-11-16 16:39:36.319795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef2510) 00:19:58.863 [2024-11-16 16:39:36.319802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.863 [2024-11-16 16:39:36.319834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ee20, cid 4, qid 0 00:19:58.863 [2024-11-16 16:39:36.319914] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.863 [2024-11-16 16:39:36.319921] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.863 [2024-11-16 16:39:36.319924] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.319928] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef2510): datao=0, datal=4096, cccid=4 00:19:58.863 [2024-11-16 16:39:36.319932] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf3ee20) on tqpair(0xef2510): expected_datao=0, payload_size=4096 00:19:58.863 [2024-11-16 16:39:36.319940] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.319943] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.319952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.863 [2024-11-16 16:39:36.319957] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.863 [2024-11-16 16:39:36.319960] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.319964] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ee20) on tqpair=0xef2510 00:19:58.863 [2024-11-16 16:39:36.319977] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:58.863 [2024-11-16 16:39:36.320018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.320024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.320028] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef2510) 00:19:58.863 [2024-11-16 16:39:36.320035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.863 [2024-11-16 16:39:36.320042] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.320046] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.320050] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xef2510) 00:19:58.863 [2024-11-16 16:39:36.320079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:58.863 [2024-11-16 16:39:36.320109] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ee20, cid 4, qid 0 00:19:58.863 [2024-11-16 16:39:36.320117] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ef80, cid 5, qid 0 00:19:58.863 [2024-11-16 16:39:36.320224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:58.863 [2024-11-16 16:39:36.320231] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:58.863 [2024-11-16 16:39:36.320234] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.320238] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef2510): datao=0, datal=1024, cccid=4 00:19:58.863 [2024-11-16 16:39:36.320242] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf3ee20) on tqpair(0xef2510): expected_datao=0, payload_size=1024 00:19:58.863 [2024-11-16 16:39:36.320249] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.320253] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.320258] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:58.863 [2024-11-16 16:39:36.320263] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:58.863 [2024-11-16 16:39:36.320266] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:58.863 [2024-11-16 16:39:36.320270] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ef80) on tqpair=0xef2510 00:19:59.125 [2024-11-16 16:39:36.365071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.125 [2024-11-16 16:39:36.365095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.125 [2024-11-16 16:39:36.365115] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365119] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ee20) on tqpair=0xef2510 00:19:59.125 [2024-11-16 16:39:36.365134] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365138] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef2510) 00:19:59.125 [2024-11-16 16:39:36.365149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.125 [2024-11-16 16:39:36.365182] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ee20, cid 4, qid 0 00:19:59.125 [2024-11-16 16:39:36.365288] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.125 [2024-11-16 16:39:36.365295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.125 [2024-11-16 16:39:36.365298] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365301] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef2510): datao=0, datal=3072, cccid=4 00:19:59.125 [2024-11-16 16:39:36.365306] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf3ee20) on tqpair(0xef2510): expected_datao=0, payload_size=3072 00:19:59.125 [2024-11-16 16:39:36.365313] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.125 [2024-11-16 
16:39:36.365317] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365324] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.125 [2024-11-16 16:39:36.365330] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.125 [2024-11-16 16:39:36.365333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365336] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ee20) on tqpair=0xef2510 00:19:59.125 [2024-11-16 16:39:36.365346] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365350] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365369] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xef2510) 00:19:59.125 [2024-11-16 16:39:36.365376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.125 [2024-11-16 16:39:36.365404] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ee20, cid 4, qid 0 00:19:59.125 [2024-11-16 16:39:36.365485] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.125 [2024-11-16 16:39:36.365491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.125 [2024-11-16 16:39:36.365495] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365498] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xef2510): datao=0, datal=8, cccid=4 00:19:59.125 [2024-11-16 16:39:36.365502] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf3ee20) on tqpair(0xef2510): expected_datao=0, payload_size=8 00:19:59.125 [2024-11-16 16:39:36.365509] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.365512] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.406172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.125 [2024-11-16 16:39:36.406210] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.125 [2024-11-16 16:39:36.406215] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.125 [2024-11-16 16:39:36.406219] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ee20) on tqpair=0xef2510 00:19:59.125 ===================================================== 00:19:59.125 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:59.125 ===================================================== 00:19:59.125 Controller Capabilities/Features 00:19:59.125 ================================ 00:19:59.125 Vendor ID: 0000 00:19:59.125 Subsystem Vendor ID: 0000 00:19:59.125 Serial Number: .................... 00:19:59.125 Model Number: ........................................ 
00:19:59.125 Firmware Version: 24.01.1 00:19:59.125 Recommended Arb Burst: 0 00:19:59.126 IEEE OUI Identifier: 00 00 00 00:19:59.126 Multi-path I/O 00:19:59.126 May have multiple subsystem ports: No 00:19:59.126 May have multiple controllers: No 00:19:59.126 Associated with SR-IOV VF: No 00:19:59.126 Max Data Transfer Size: 131072 00:19:59.126 Max Number of Namespaces: 0 00:19:59.126 Max Number of I/O Queues: 1024 00:19:59.126 NVMe Specification Version (VS): 1.3 00:19:59.126 NVMe Specification Version (Identify): 1.3 00:19:59.126 Maximum Queue Entries: 128 00:19:59.126 Contiguous Queues Required: Yes 00:19:59.126 Arbitration Mechanisms Supported 00:19:59.126 Weighted Round Robin: Not Supported 00:19:59.126 Vendor Specific: Not Supported 00:19:59.126 Reset Timeout: 15000 ms 00:19:59.126 Doorbell Stride: 4 bytes 00:19:59.126 NVM Subsystem Reset: Not Supported 00:19:59.126 Command Sets Supported 00:19:59.126 NVM Command Set: Supported 00:19:59.126 Boot Partition: Not Supported 00:19:59.126 Memory Page Size Minimum: 4096 bytes 00:19:59.126 Memory Page Size Maximum: 4096 bytes 00:19:59.126 Persistent Memory Region: Not Supported 00:19:59.126 Optional Asynchronous Events Supported 00:19:59.126 Namespace Attribute Notices: Not Supported 00:19:59.126 Firmware Activation Notices: Not Supported 00:19:59.126 ANA Change Notices: Not Supported 00:19:59.126 PLE Aggregate Log Change Notices: Not Supported 00:19:59.126 LBA Status Info Alert Notices: Not Supported 00:19:59.126 EGE Aggregate Log Change Notices: Not Supported 00:19:59.126 Normal NVM Subsystem Shutdown event: Not Supported 00:19:59.126 Zone Descriptor Change Notices: Not Supported 00:19:59.126 Discovery Log Change Notices: Supported 00:19:59.126 Controller Attributes 00:19:59.126 128-bit Host Identifier: Not Supported 00:19:59.126 Non-Operational Permissive Mode: Not Supported 00:19:59.126 NVM Sets: Not Supported 00:19:59.126 Read Recovery Levels: Not Supported 00:19:59.126 Endurance Groups: Not Supported 00:19:59.126 Predictable Latency Mode: Not Supported 00:19:59.126 Traffic Based Keep ALive: Not Supported 00:19:59.126 Namespace Granularity: Not Supported 00:19:59.126 SQ Associations: Not Supported 00:19:59.126 UUID List: Not Supported 00:19:59.126 Multi-Domain Subsystem: Not Supported 00:19:59.126 Fixed Capacity Management: Not Supported 00:19:59.126 Variable Capacity Management: Not Supported 00:19:59.126 Delete Endurance Group: Not Supported 00:19:59.126 Delete NVM Set: Not Supported 00:19:59.126 Extended LBA Formats Supported: Not Supported 00:19:59.126 Flexible Data Placement Supported: Not Supported 00:19:59.126 00:19:59.126 Controller Memory Buffer Support 00:19:59.126 ================================ 00:19:59.126 Supported: No 00:19:59.126 00:19:59.126 Persistent Memory Region Support 00:19:59.126 ================================ 00:19:59.126 Supported: No 00:19:59.126 00:19:59.126 Admin Command Set Attributes 00:19:59.126 ============================ 00:19:59.126 Security Send/Receive: Not Supported 00:19:59.126 Format NVM: Not Supported 00:19:59.126 Firmware Activate/Download: Not Supported 00:19:59.126 Namespace Management: Not Supported 00:19:59.126 Device Self-Test: Not Supported 00:19:59.126 Directives: Not Supported 00:19:59.126 NVMe-MI: Not Supported 00:19:59.126 Virtualization Management: Not Supported 00:19:59.126 Doorbell Buffer Config: Not Supported 00:19:59.126 Get LBA Status Capability: Not Supported 00:19:59.126 Command & Feature Lockdown Capability: Not Supported 00:19:59.126 Abort Command Limit: 1 00:19:59.126 
Async Event Request Limit: 4 00:19:59.126 Number of Firmware Slots: N/A 00:19:59.126 Firmware Slot 1 Read-Only: N/A 00:19:59.126 Firmware Activation Without Reset: N/A 00:19:59.126 Multiple Update Detection Support: N/A 00:19:59.126 Firmware Update Granularity: No Information Provided 00:19:59.126 Per-Namespace SMART Log: No 00:19:59.126 Asymmetric Namespace Access Log Page: Not Supported 00:19:59.126 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:59.126 Command Effects Log Page: Not Supported 00:19:59.126 Get Log Page Extended Data: Supported 00:19:59.126 Telemetry Log Pages: Not Supported 00:19:59.126 Persistent Event Log Pages: Not Supported 00:19:59.126 Supported Log Pages Log Page: May Support 00:19:59.126 Commands Supported & Effects Log Page: Not Supported 00:19:59.126 Feature Identifiers & Effects Log Page:May Support 00:19:59.126 NVMe-MI Commands & Effects Log Page: May Support 00:19:59.126 Data Area 4 for Telemetry Log: Not Supported 00:19:59.126 Error Log Page Entries Supported: 128 00:19:59.126 Keep Alive: Not Supported 00:19:59.126 00:19:59.126 NVM Command Set Attributes 00:19:59.126 ========================== 00:19:59.126 Submission Queue Entry Size 00:19:59.126 Max: 1 00:19:59.126 Min: 1 00:19:59.126 Completion Queue Entry Size 00:19:59.126 Max: 1 00:19:59.126 Min: 1 00:19:59.126 Number of Namespaces: 0 00:19:59.126 Compare Command: Not Supported 00:19:59.126 Write Uncorrectable Command: Not Supported 00:19:59.126 Dataset Management Command: Not Supported 00:19:59.126 Write Zeroes Command: Not Supported 00:19:59.126 Set Features Save Field: Not Supported 00:19:59.126 Reservations: Not Supported 00:19:59.126 Timestamp: Not Supported 00:19:59.126 Copy: Not Supported 00:19:59.126 Volatile Write Cache: Not Present 00:19:59.126 Atomic Write Unit (Normal): 1 00:19:59.126 Atomic Write Unit (PFail): 1 00:19:59.126 Atomic Compare & Write Unit: 1 00:19:59.126 Fused Compare & Write: Supported 00:19:59.126 Scatter-Gather List 00:19:59.126 SGL Command Set: Supported 00:19:59.126 SGL Keyed: Supported 00:19:59.126 SGL Bit Bucket Descriptor: Not Supported 00:19:59.126 SGL Metadata Pointer: Not Supported 00:19:59.126 Oversized SGL: Not Supported 00:19:59.126 SGL Metadata Address: Not Supported 00:19:59.126 SGL Offset: Supported 00:19:59.126 Transport SGL Data Block: Not Supported 00:19:59.126 Replay Protected Memory Block: Not Supported 00:19:59.126 00:19:59.126 Firmware Slot Information 00:19:59.126 ========================= 00:19:59.126 Active slot: 0 00:19:59.126 00:19:59.126 00:19:59.126 Error Log 00:19:59.126 ========= 00:19:59.126 00:19:59.126 Active Namespaces 00:19:59.126 ================= 00:19:59.126 Discovery Log Page 00:19:59.126 ================== 00:19:59.126 Generation Counter: 2 00:19:59.126 Number of Records: 2 00:19:59.126 Record Format: 0 00:19:59.126 00:19:59.126 Discovery Log Entry 0 00:19:59.126 ---------------------- 00:19:59.126 Transport Type: 3 (TCP) 00:19:59.126 Address Family: 1 (IPv4) 00:19:59.126 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:59.126 Entry Flags: 00:19:59.126 Duplicate Returned Information: 1 00:19:59.126 Explicit Persistent Connection Support for Discovery: 1 00:19:59.126 Transport Requirements: 00:19:59.126 Secure Channel: Not Required 00:19:59.126 Port ID: 0 (0x0000) 00:19:59.126 Controller ID: 65535 (0xffff) 00:19:59.126 Admin Max SQ Size: 128 00:19:59.126 Transport Service Identifier: 4420 00:19:59.126 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:59.126 Transport Address: 10.0.0.2 00:19:59.126 
Discovery Log Entry 1 00:19:59.126 ---------------------- 00:19:59.126 Transport Type: 3 (TCP) 00:19:59.126 Address Family: 1 (IPv4) 00:19:59.126 Subsystem Type: 2 (NVM Subsystem) 00:19:59.126 Entry Flags: 00:19:59.126 Duplicate Returned Information: 0 00:19:59.126 Explicit Persistent Connection Support for Discovery: 0 00:19:59.127 Transport Requirements: 00:19:59.127 Secure Channel: Not Required 00:19:59.127 Port ID: 0 (0x0000) 00:19:59.127 Controller ID: 65535 (0xffff) 00:19:59.127 Admin Max SQ Size: 128 00:19:59.127 Transport Service Identifier: 4420 00:19:59.127 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:59.127 Transport Address: 10.0.0.2 [2024-11-16 16:39:36.406334] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:59.127 [2024-11-16 16:39:36.406351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.127 [2024-11-16 16:39:36.406358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.127 [2024-11-16 16:39:36.406364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.127 [2024-11-16 16:39:36.406369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.127 [2024-11-16 16:39:36.406378] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406382] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406386] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.406394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.406419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.406484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 16:39:36.406491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.406495] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.127 [2024-11-16 16:39:36.406506] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406510] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406514] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.406521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.406543] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.406611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 16:39:36.406617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.406621] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406624] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.127 [2024-11-16 16:39:36.406629] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:59.127 [2024-11-16 16:39:36.406634] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:59.127 [2024-11-16 16:39:36.406643] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406648] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406651] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.406658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.406677] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.406732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 16:39:36.406738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.406742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.127 [2024-11-16 16:39:36.406756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406760] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406764] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.406770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.406788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.406844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 16:39:36.406850] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.406853] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406857] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.127 [2024-11-16 16:39:36.406867] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406874] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.406881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.406899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.406952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 
16:39:36.406958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.406962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.127 [2024-11-16 16:39:36.406976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406980] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.406984] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.406990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.407008] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.407089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 16:39:36.407096] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.407100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407104] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.127 [2024-11-16 16:39:36.407115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407122] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.407129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.407149] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.407218] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 16:39:36.407224] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.407227] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.127 [2024-11-16 16:39:36.407241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407245] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407249] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.407255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.407273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.407332] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 16:39:36.407338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.407341] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 
[2024-11-16 16:39:36.407345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.127 [2024-11-16 16:39:36.407355] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407359] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407363] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.407369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.407387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.407435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 16:39:36.407441] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.407445] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407448] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.127 [2024-11-16 16:39:36.407460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407464] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.127 [2024-11-16 16:39:36.407474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.127 [2024-11-16 16:39:36.407492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.127 [2024-11-16 16:39:36.407555] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.127 [2024-11-16 16:39:36.407561] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.127 [2024-11-16 16:39:36.407565] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.127 [2024-11-16 16:39:36.407568] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.128 [2024-11-16 16:39:36.407578] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407582] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.128 [2024-11-16 16:39:36.407592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.128 [2024-11-16 16:39:36.407610] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.128 [2024-11-16 16:39:36.407679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.128 [2024-11-16 16:39:36.407685] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.128 [2024-11-16 16:39:36.407689] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407692] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.128 [2024-11-16 16:39:36.407702] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407710] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.128 [2024-11-16 16:39:36.407716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.128 [2024-11-16 16:39:36.407733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.128 [2024-11-16 16:39:36.407787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.128 [2024-11-16 16:39:36.407793] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.128 [2024-11-16 16:39:36.407796] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407800] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.128 [2024-11-16 16:39:36.407810] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407814] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407818] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.128 [2024-11-16 16:39:36.407824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.128 [2024-11-16 16:39:36.407842] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.128 [2024-11-16 16:39:36.407894] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.128 [2024-11-16 16:39:36.407900] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.128 [2024-11-16 16:39:36.407904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.128 [2024-11-16 16:39:36.407917] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407921] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.407925] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.128 [2024-11-16 16:39:36.407931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.128 [2024-11-16 16:39:36.407949] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.128 [2024-11-16 16:39:36.408014] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.128 [2024-11-16 16:39:36.408020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.128 [2024-11-16 16:39:36.408024] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.408027] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.128 [2024-11-16 16:39:36.408037] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.408041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.408045] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xef2510) 00:19:59.128 [2024-11-16 16:39:36.408051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.128 [2024-11-16 16:39:36.411135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf3ecc0, cid 3, qid 0 00:19:59.128 [2024-11-16 16:39:36.411191] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.128 [2024-11-16 16:39:36.411198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.128 [2024-11-16 16:39:36.411201] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.411205] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf3ecc0) on tqpair=0xef2510 00:19:59.128 [2024-11-16 16:39:36.411213] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:19:59.128 00:19:59.128 16:39:36 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:59.128 [2024-11-16 16:39:36.441823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:59.128 [2024-11-16 16:39:36.441883] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93737 ] 00:19:59.128 [2024-11-16 16:39:36.580571] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:59.128 [2024-11-16 16:39:36.580642] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:59.128 [2024-11-16 16:39:36.580648] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:59.128 [2024-11-16 16:39:36.580658] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:59.128 [2024-11-16 16:39:36.580667] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:59.128 [2024-11-16 16:39:36.580759] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:59.128 [2024-11-16 16:39:36.580803] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1060510 0 00:19:59.128 [2024-11-16 16:39:36.586089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:59.128 [2024-11-16 16:39:36.586111] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:59.128 [2024-11-16 16:39:36.586131] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:59.128 [2024-11-16 16:39:36.586135] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:59.128 [2024-11-16 16:39:36.586177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.586183] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.586187] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.128 [2024-11-16 16:39:36.586197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:59.128 [2024-11-16 16:39:36.586226] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.128 [2024-11-16 16:39:36.594073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.128 [2024-11-16 16:39:36.594094] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.128 [2024-11-16 16:39:36.594114] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.594118] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ac8a0) on tqpair=0x1060510 00:19:59.128 [2024-11-16 16:39:36.594127] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:59.128 [2024-11-16 16:39:36.594133] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:59.128 [2024-11-16 16:39:36.594139] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:59.128 [2024-11-16 16:39:36.594153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.594158] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.594161] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.128 [2024-11-16 16:39:36.594169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.128 [2024-11-16 16:39:36.594195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.128 [2024-11-16 16:39:36.594264] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.128 [2024-11-16 16:39:36.594271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.128 [2024-11-16 16:39:36.594274] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.594278] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ac8a0) on tqpair=0x1060510 00:19:59.128 [2024-11-16 16:39:36.594283] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:59.128 [2024-11-16 16:39:36.594290] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:59.128 [2024-11-16 16:39:36.594297] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.594301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.128 [2024-11-16 16:39:36.594304] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.128 [2024-11-16 16:39:36.594311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.128 [2024-11-16 16:39:36.594330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.128 [2024-11-16 16:39:36.594646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.128 [2024-11-16 16:39:36.594661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.129 [2024-11-16 16:39:36.594665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 
16:39:36.594669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ac8a0) on tqpair=0x1060510 00:19:59.129 [2024-11-16 16:39:36.594675] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:59.129 [2024-11-16 16:39:36.594684] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:59.129 [2024-11-16 16:39:36.594691] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.594695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.594698] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.129 [2024-11-16 16:39:36.594705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.129 [2024-11-16 16:39:36.594725] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.129 [2024-11-16 16:39:36.595235] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.129 [2024-11-16 16:39:36.595248] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.129 [2024-11-16 16:39:36.595268] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.595272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ac8a0) on tqpair=0x1060510 00:19:59.129 [2024-11-16 16:39:36.595278] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:59.129 [2024-11-16 16:39:36.595288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.595292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.595296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.129 [2024-11-16 16:39:36.595303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.129 [2024-11-16 16:39:36.595324] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.129 [2024-11-16 16:39:36.595409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.129 [2024-11-16 16:39:36.595415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.129 [2024-11-16 16:39:36.595419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.595422] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ac8a0) on tqpair=0x1060510 00:19:59.129 [2024-11-16 16:39:36.595427] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:59.129 [2024-11-16 16:39:36.595432] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:59.129 [2024-11-16 16:39:36.595438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:59.129 [2024-11-16 16:39:36.595543] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
Setting CC.EN = 1 00:19:59.129 [2024-11-16 16:39:36.595547] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:59.129 [2024-11-16 16:39:36.595555] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.595559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.595562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.129 [2024-11-16 16:39:36.595569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.129 [2024-11-16 16:39:36.595589] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.129 [2024-11-16 16:39:36.595688] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.129 [2024-11-16 16:39:36.595694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.129 [2024-11-16 16:39:36.595698] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.595702] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ac8a0) on tqpair=0x1060510 00:19:59.129 [2024-11-16 16:39:36.595707] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:59.129 [2024-11-16 16:39:36.595717] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.595721] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.595724] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.129 [2024-11-16 16:39:36.595731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.129 [2024-11-16 16:39:36.595750] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.129 [2024-11-16 16:39:36.596131] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.129 [2024-11-16 16:39:36.596145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.129 [2024-11-16 16:39:36.596149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ac8a0) on tqpair=0x1060510 00:19:59.129 [2024-11-16 16:39:36.596174] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:59.129 [2024-11-16 16:39:36.596179] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:59.129 [2024-11-16 16:39:36.596187] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:59.129 [2024-11-16 16:39:36.596202] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:59.129 [2024-11-16 16:39:36.596211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596215] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.129 [2024-11-16 16:39:36.596226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.129 [2024-11-16 16:39:36.596250] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.129 [2024-11-16 16:39:36.596379] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.129 [2024-11-16 16:39:36.596385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.129 [2024-11-16 16:39:36.596389] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596392] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1060510): datao=0, datal=4096, cccid=0 00:19:59.129 [2024-11-16 16:39:36.596397] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ac8a0) on tqpair(0x1060510): expected_datao=0, payload_size=4096 00:19:59.129 [2024-11-16 16:39:36.596404] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596408] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.129 [2024-11-16 16:39:36.596421] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.129 [2024-11-16 16:39:36.596439] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596443] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ac8a0) on tqpair=0x1060510 00:19:59.129 [2024-11-16 16:39:36.596452] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:59.129 [2024-11-16 16:39:36.596457] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:59.129 [2024-11-16 16:39:36.596461] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:59.129 [2024-11-16 16:39:36.596465] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:59.129 [2024-11-16 16:39:36.596469] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:59.129 [2024-11-16 16:39:36.596474] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:59.129 [2024-11-16 16:39:36.596487] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:59.129 [2024-11-16 16:39:36.596494] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596498] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596502] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.129 [2024-11-16 16:39:36.596509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.129 [2024-11-16 16:39:36.596546] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.129 [2024-11-16 16:39:36.596611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.129 [2024-11-16 16:39:36.596618] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.129 [2024-11-16 16:39:36.596635] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596639] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ac8a0) on tqpair=0x1060510 00:19:59.129 [2024-11-16 16:39:36.596646] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596650] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1060510) 00:19:59.129 [2024-11-16 16:39:36.596660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.129 [2024-11-16 16:39:36.596666] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596669] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596673] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1060510) 00:19:59.129 [2024-11-16 16:39:36.596678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.129 [2024-11-16 16:39:36.596683] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596687] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596690] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1060510) 00:19:59.129 [2024-11-16 16:39:36.596695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.129 [2024-11-16 16:39:36.596701] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596704] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.129 [2024-11-16 16:39:36.596708] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1060510) 00:19:59.130 [2024-11-16 16:39:36.596713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.130 [2024-11-16 16:39:36.596717] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.596730] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.596737] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.596741] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.596745] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1060510) 00:19:59.130 [2024-11-16 16:39:36.596751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:59.130 [2024-11-16 16:39:36.596773] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ac8a0, cid 0, qid 0 00:19:59.130 [2024-11-16 16:39:36.596780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10aca00, cid 1, qid 0 00:19:59.130 [2024-11-16 16:39:36.596785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10acb60, cid 2, qid 0 00:19:59.130 [2024-11-16 16:39:36.596789] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10accc0, cid 3, qid 0 00:19:59.130 [2024-11-16 16:39:36.596794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace20, cid 4, qid 0 00:19:59.130 [2024-11-16 16:39:36.597218] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.130 [2024-11-16 16:39:36.597234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.130 [2024-11-16 16:39:36.597238] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597242] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ace20) on tqpair=0x1060510 00:19:59.130 [2024-11-16 16:39:36.597248] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:59.130 [2024-11-16 16:39:36.597253] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.597262] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.597274] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.597281] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597285] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597289] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1060510) 00:19:59.130 [2024-11-16 16:39:36.597296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:59.130 [2024-11-16 16:39:36.597319] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace20, cid 4, qid 0 00:19:59.130 [2024-11-16 16:39:36.597397] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.130 [2024-11-16 16:39:36.597404] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.130 [2024-11-16 16:39:36.597407] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597411] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ace20) on tqpair=0x1060510 00:19:59.130 [2024-11-16 16:39:36.597467] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.597478] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.597486] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597489] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1060510) 00:19:59.130 [2024-11-16 16:39:36.597500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.130 [2024-11-16 16:39:36.597533] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace20, cid 4, qid 0 00:19:59.130 [2024-11-16 16:39:36.597753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.130 [2024-11-16 16:39:36.597767] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.130 [2024-11-16 16:39:36.597771] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597775] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1060510): datao=0, datal=4096, cccid=4 00:19:59.130 [2024-11-16 16:39:36.597779] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ace20) on tqpair(0x1060510): expected_datao=0, payload_size=4096 00:19:59.130 [2024-11-16 16:39:36.597787] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597791] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.130 [2024-11-16 16:39:36.597818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.130 [2024-11-16 16:39:36.597821] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ace20) on tqpair=0x1060510 00:19:59.130 [2024-11-16 16:39:36.597842] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:59.130 [2024-11-16 16:39:36.597853] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.597863] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.597871] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597874] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.597878] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1060510) 00:19:59.130 [2024-11-16 16:39:36.597885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.130 [2024-11-16 16:39:36.597906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace20, cid 4, qid 0 00:19:59.130 [2024-11-16 16:39:36.602087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.130 [2024-11-16 16:39:36.602105] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.130 [2024-11-16 16:39:36.602125] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.602129] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1060510): datao=0, datal=4096, cccid=4 00:19:59.130 [2024-11-16 16:39:36.602133] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ace20) on tqpair(0x1060510): expected_datao=0, payload_size=4096 00:19:59.130 [2024-11-16 16:39:36.602140] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.602144] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.602150] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.130 [2024-11-16 16:39:36.602155] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.130 [2024-11-16 16:39:36.602158] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.602162] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ace20) on tqpair=0x1060510 00:19:59.130 [2024-11-16 16:39:36.602181] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.602194] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:59.130 [2024-11-16 16:39:36.602202] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.602206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.602210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1060510) 00:19:59.130 [2024-11-16 16:39:36.602217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.130 [2024-11-16 16:39:36.602242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace20, cid 4, qid 0 00:19:59.130 [2024-11-16 16:39:36.602312] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.130 [2024-11-16 16:39:36.602318] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.130 [2024-11-16 16:39:36.602322] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.602325] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1060510): datao=0, datal=4096, cccid=4 00:19:59.130 [2024-11-16 16:39:36.602329] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ace20) on tqpair(0x1060510): expected_datao=0, payload_size=4096 00:19:59.130 [2024-11-16 16:39:36.602336] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.602340] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.130 [2024-11-16 16:39:36.602347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.131 [2024-11-16 16:39:36.602368] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.131 [2024-11-16 16:39:36.602371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.602375] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ace20) on tqpair=0x1060510 00:19:59.131 [2024-11-16 16:39:36.602384] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:59.131 [2024-11-16 16:39:36.602392] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log 
pages (timeout 30000 ms) 00:19:59.131 [2024-11-16 16:39:36.602404] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:59.131 [2024-11-16 16:39:36.602410] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:59.131 [2024-11-16 16:39:36.602415] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:59.131 [2024-11-16 16:39:36.602420] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:59.131 [2024-11-16 16:39:36.602425] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:59.131 [2024-11-16 16:39:36.602430] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:59.131 [2024-11-16 16:39:36.602444] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.602448] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.602452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1060510) 00:19:59.131 [2024-11-16 16:39:36.602459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.131 [2024-11-16 16:39:36.602465] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.602469] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.602472] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1060510) 00:19:59.131 [2024-11-16 16:39:36.602478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.131 [2024-11-16 16:39:36.602503] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace20, cid 4, qid 0 00:19:59.131 [2024-11-16 16:39:36.602511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10acf80, cid 5, qid 0 00:19:59.131 [2024-11-16 16:39:36.602923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.131 [2024-11-16 16:39:36.602938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.131 [2024-11-16 16:39:36.602943] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.602946] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ace20) on tqpair=0x1060510 00:19:59.131 [2024-11-16 16:39:36.602966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.131 [2024-11-16 16:39:36.602971] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.131 [2024-11-16 16:39:36.602975] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.602979] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10acf80) on tqpair=0x1060510 00:19:59.131 [2024-11-16 16:39:36.602989] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.602994] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.131 [2024-11-16 
16:39:36.602997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1060510) 00:19:59.131 [2024-11-16 16:39:36.603004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.131 [2024-11-16 16:39:36.603024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10acf80, cid 5, qid 0 00:19:59.131 [2024-11-16 16:39:36.603184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.131 [2024-11-16 16:39:36.603192] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.131 [2024-11-16 16:39:36.603195] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.603199] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10acf80) on tqpair=0x1060510 00:19:59.131 [2024-11-16 16:39:36.603209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.603213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.603217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1060510) 00:19:59.131 [2024-11-16 16:39:36.603223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.131 [2024-11-16 16:39:36.603243] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10acf80, cid 5, qid 0 00:19:59.131 [2024-11-16 16:39:36.603585] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.131 [2024-11-16 16:39:36.603599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.131 [2024-11-16 16:39:36.603603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.603607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10acf80) on tqpair=0x1060510 00:19:59.131 [2024-11-16 16:39:36.603618] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.603622] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.603640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1060510) 00:19:59.131 [2024-11-16 16:39:36.603647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.131 [2024-11-16 16:39:36.603666] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10acf80, cid 5, qid 0 00:19:59.131 [2024-11-16 16:39:36.604058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.131 [2024-11-16 16:39:36.604091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.131 [2024-11-16 16:39:36.604096] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10acf80) on tqpair=0x1060510 00:19:59.131 [2024-11-16 16:39:36.604115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1060510) 00:19:59.131 
[2024-11-16 16:39:36.604130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.131 [2024-11-16 16:39:36.604137] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604141] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604144] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1060510) 00:19:59.131 [2024-11-16 16:39:36.604164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.131 [2024-11-16 16:39:36.604170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604177] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1060510) 00:19:59.131 [2024-11-16 16:39:36.604182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.131 [2024-11-16 16:39:36.604189] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604192] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604195] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1060510) 00:19:59.131 [2024-11-16 16:39:36.604201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.131 [2024-11-16 16:39:36.604239] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10acf80, cid 5, qid 0 00:19:59.131 [2024-11-16 16:39:36.604246] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ace20, cid 4, qid 0 00:19:59.131 [2024-11-16 16:39:36.604251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad0e0, cid 6, qid 0 00:19:59.131 [2024-11-16 16:39:36.604255] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad240, cid 7, qid 0 00:19:59.131 [2024-11-16 16:39:36.604715] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.131 [2024-11-16 16:39:36.604729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.131 [2024-11-16 16:39:36.604734] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604737] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1060510): datao=0, datal=8192, cccid=5 00:19:59.131 [2024-11-16 16:39:36.604742] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10acf80) on tqpair(0x1060510): expected_datao=0, payload_size=8192 00:19:59.131 [2024-11-16 16:39:36.604773] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604778] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.131 [2024-11-16 16:39:36.604789] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.131 [2024-11-16 16:39:36.604793] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604796] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1060510): datao=0, datal=512, cccid=4 00:19:59.131 [2024-11-16 16:39:36.604800] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ace20) on tqpair(0x1060510): expected_datao=0, payload_size=512 00:19:59.131 [2024-11-16 16:39:36.604806] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604810] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604815] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.131 [2024-11-16 16:39:36.604820] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.131 [2024-11-16 16:39:36.604823] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604826] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1060510): datao=0, datal=512, cccid=6 00:19:59.131 [2024-11-16 16:39:36.604830] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ad0e0) on tqpair(0x1060510): expected_datao=0, payload_size=512 00:19:59.131 [2024-11-16 16:39:36.604836] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604840] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.131 [2024-11-16 16:39:36.604845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:59.131 [2024-11-16 16:39:36.604850] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:59.131 [2024-11-16 16:39:36.604853] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:59.132 [2024-11-16 16:39:36.604856] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1060510): datao=0, datal=4096, cccid=7 00:19:59.132 [2024-11-16 16:39:36.604860] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10ad240) on tqpair(0x1060510): expected_datao=0, payload_size=4096 00:19:59.132 [2024-11-16 16:39:36.604866] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:59.132 [2024-11-16 16:39:36.604870] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:59.132 [2024-11-16 16:39:36.604875] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.132 [2024-11-16 16:39:36.604880] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.132 [2024-11-16 16:39:36.604899] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.132 [2024-11-16 16:39:36.604903] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10acf80) on tqpair=0x1060510 00:19:59.132 [2024-11-16 16:39:36.604920] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.132 [2024-11-16 16:39:36.604927] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.132 [2024-11-16 16:39:36.604930] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.132 [2024-11-16 16:39:36.604934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ace20) on tqpair=0x1060510 00:19:59.132 [2024-11-16 16:39:36.604944] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.132 [2024-11-16 16:39:36.604950] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.132 ===================================================== 00:19:59.132 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:59.132 ===================================================== 00:19:59.132 Controller Capabilities/Features 00:19:59.132 ================================ 00:19:59.132 Vendor ID: 8086 00:19:59.132 Subsystem Vendor ID: 8086 00:19:59.132 Serial Number: SPDK00000000000001 00:19:59.132 Model Number: SPDK bdev Controller 00:19:59.132 Firmware Version: 24.01.1 00:19:59.132 Recommended Arb Burst: 6 00:19:59.132 IEEE OUI Identifier: e4 d2 5c 00:19:59.132 Multi-path I/O 00:19:59.132 May have multiple subsystem ports: Yes 00:19:59.132 May have multiple controllers: Yes 00:19:59.132 Associated with SR-IOV VF: No 00:19:59.132 Max Data Transfer Size: 131072 00:19:59.132 Max Number of Namespaces: 32 00:19:59.132 Max Number of I/O Queues: 127 00:19:59.132 NVMe Specification Version (VS): 1.3 00:19:59.132 NVMe Specification Version (Identify): 1.3 00:19:59.132 Maximum Queue Entries: 128 00:19:59.132 Contiguous Queues Required: Yes 00:19:59.132 Arbitration Mechanisms Supported 00:19:59.132 Weighted Round Robin: Not Supported 00:19:59.132 Vendor Specific: Not Supported 00:19:59.132 Reset Timeout: 15000 ms 00:19:59.132 Doorbell Stride: 4 bytes 00:19:59.132 NVM Subsystem Reset: Not Supported 00:19:59.132 Command Sets Supported 00:19:59.132 NVM Command Set: Supported 00:19:59.132 Boot Partition: Not Supported 00:19:59.132 Memory Page Size Minimum: 4096 bytes 00:19:59.132 Memory Page Size Maximum: 4096 bytes 00:19:59.132 Persistent Memory Region: Not Supported 00:19:59.132 Optional Asynchronous Events Supported 00:19:59.132 Namespace Attribute Notices: Supported 00:19:59.132 Firmware Activation Notices: Not Supported 00:19:59.132 ANA Change Notices: Not Supported 00:19:59.132 PLE Aggregate Log Change Notices: Not Supported 00:19:59.132 LBA Status Info Alert Notices: Not Supported 00:19:59.132 EGE Aggregate Log Change Notices: Not Supported 00:19:59.132 Normal NVM Subsystem Shutdown event: Not Supported 00:19:59.132 Zone Descriptor Change Notices: Not Supported 00:19:59.132 Discovery Log Change Notices: Not Supported 00:19:59.132 Controller Attributes 00:19:59.132 128-bit Host Identifier: Supported 00:19:59.132 Non-Operational Permissive Mode: Not Supported 00:19:59.132 NVM Sets: Not Supported 00:19:59.132 Read Recovery Levels: Not Supported 00:19:59.132 Endurance Groups: Not Supported 00:19:59.132 Predictable Latency Mode: Not Supported 00:19:59.132 Traffic Based Keep ALive: Not Supported 00:19:59.132 Namespace Granularity: Not Supported 00:19:59.132 SQ Associations: Not Supported 00:19:59.132 UUID List: Not Supported 00:19:59.132 Multi-Domain Subsystem: Not Supported 00:19:59.132 Fixed Capacity Management: Not Supported 00:19:59.132 Variable Capacity Management: Not Supported 00:19:59.132 Delete Endurance Group: Not Supported 00:19:59.132 Delete NVM Set: Not Supported 00:19:59.132 Extended LBA Formats Supported: Not Supported 00:19:59.132 Flexible Data Placement Supported: Not Supported 00:19:59.132 00:19:59.132 Controller Memory Buffer Support 00:19:59.132 ================================ 00:19:59.132 Supported: No 00:19:59.132 00:19:59.132 Persistent Memory Region Support 00:19:59.132 ================================ 00:19:59.132 Supported: No 00:19:59.132 00:19:59.132 Admin Command Set Attributes 00:19:59.132 ============================ 00:19:59.132 Security Send/Receive: Not Supported 00:19:59.132 Format NVM: Not Supported 00:19:59.132 Firmware Activate/Download: Not Supported 00:19:59.132 Namespace Management: Not Supported 
00:19:59.132 Device Self-Test: Not Supported 00:19:59.132 Directives: Not Supported 00:19:59.132 NVMe-MI: Not Supported 00:19:59.132 Virtualization Management: Not Supported 00:19:59.132 Doorbell Buffer Config: Not Supported 00:19:59.132 Get LBA Status Capability: Not Supported 00:19:59.132 Command & Feature Lockdown Capability: Not Supported 00:19:59.132 Abort Command Limit: 4 00:19:59.132 Async Event Request Limit: 4 00:19:59.132 Number of Firmware Slots: N/A 00:19:59.132 Firmware Slot 1 Read-Only: N/A 00:19:59.132 Firmware Activation Without Reset: [2024-11-16 16:39:36.604953] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.132 [2024-11-16 16:39:36.604957] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ad0e0) on tqpair=0x1060510 00:19:59.132 [2024-11-16 16:39:36.604964] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.132 [2024-11-16 16:39:36.604970] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.132 [2024-11-16 16:39:36.604973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.132 [2024-11-16 16:39:36.604977] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ad240) on tqpair=0x1060510 00:19:59.132 N/A 00:19:59.132 Multiple Update Detection Support: N/A 00:19:59.132 Firmware Update Granularity: No Information Provided 00:19:59.132 Per-Namespace SMART Log: No 00:19:59.132 Asymmetric Namespace Access Log Page: Not Supported 00:19:59.132 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:59.132 Command Effects Log Page: Supported 00:19:59.132 Get Log Page Extended Data: Supported 00:19:59.132 Telemetry Log Pages: Not Supported 00:19:59.132 Persistent Event Log Pages: Not Supported 00:19:59.132 Supported Log Pages Log Page: May Support 00:19:59.132 Commands Supported & Effects Log Page: Not Supported 00:19:59.132 Feature Identifiers & Effects Log Page:May Support 00:19:59.132 NVMe-MI Commands & Effects Log Page: May Support 00:19:59.132 Data Area 4 for Telemetry Log: Not Supported 00:19:59.132 Error Log Page Entries Supported: 128 00:19:59.132 Keep Alive: Supported 00:19:59.132 Keep Alive Granularity: 10000 ms 00:19:59.132 00:19:59.132 NVM Command Set Attributes 00:19:59.132 ========================== 00:19:59.132 Submission Queue Entry Size 00:19:59.132 Max: 64 00:19:59.132 Min: 64 00:19:59.132 Completion Queue Entry Size 00:19:59.132 Max: 16 00:19:59.132 Min: 16 00:19:59.132 Number of Namespaces: 32 00:19:59.132 Compare Command: Supported 00:19:59.132 Write Uncorrectable Command: Not Supported 00:19:59.132 Dataset Management Command: Supported 00:19:59.132 Write Zeroes Command: Supported 00:19:59.132 Set Features Save Field: Not Supported 00:19:59.132 Reservations: Supported 00:19:59.132 Timestamp: Not Supported 00:19:59.132 Copy: Supported 00:19:59.132 Volatile Write Cache: Present 00:19:59.132 Atomic Write Unit (Normal): 1 00:19:59.132 Atomic Write Unit (PFail): 1 00:19:59.132 Atomic Compare & Write Unit: 1 00:19:59.132 Fused Compare & Write: Supported 00:19:59.132 Scatter-Gather List 00:19:59.132 SGL Command Set: Supported 00:19:59.132 SGL Keyed: Supported 00:19:59.132 SGL Bit Bucket Descriptor: Not Supported 00:19:59.132 SGL Metadata Pointer: Not Supported 00:19:59.132 Oversized SGL: Not Supported 00:19:59.132 SGL Metadata Address: Not Supported 00:19:59.132 SGL Offset: Supported 00:19:59.132 Transport SGL Data Block: Not Supported 00:19:59.132 Replay Protected Memory Block: Not Supported 00:19:59.132 00:19:59.132 Firmware Slot 
Information 00:19:59.132 ========================= 00:19:59.132 Active slot: 1 00:19:59.132 Slot 1 Firmware Revision: 24.01.1 00:19:59.132 00:19:59.132 00:19:59.132 Commands Supported and Effects 00:19:59.133 ============================== 00:19:59.133 Admin Commands 00:19:59.133 -------------- 00:19:59.133 Get Log Page (02h): Supported 00:19:59.133 Identify (06h): Supported 00:19:59.133 Abort (08h): Supported 00:19:59.133 Set Features (09h): Supported 00:19:59.133 Get Features (0Ah): Supported 00:19:59.133 Asynchronous Event Request (0Ch): Supported 00:19:59.133 Keep Alive (18h): Supported 00:19:59.133 I/O Commands 00:19:59.133 ------------ 00:19:59.133 Flush (00h): Supported LBA-Change 00:19:59.133 Write (01h): Supported LBA-Change 00:19:59.133 Read (02h): Supported 00:19:59.133 Compare (05h): Supported 00:19:59.133 Write Zeroes (08h): Supported LBA-Change 00:19:59.133 Dataset Management (09h): Supported LBA-Change 00:19:59.133 Copy (19h): Supported LBA-Change 00:19:59.133 Unknown (79h): Supported LBA-Change 00:19:59.133 Unknown (7Ah): Supported 00:19:59.133 00:19:59.133 Error Log 00:19:59.133 ========= 00:19:59.133 00:19:59.133 Arbitration 00:19:59.133 =========== 00:19:59.133 Arbitration Burst: 1 00:19:59.133 00:19:59.133 Power Management 00:19:59.133 ================ 00:19:59.133 Number of Power States: 1 00:19:59.133 Current Power State: Power State #0 00:19:59.133 Power State #0: 00:19:59.133 Max Power: 0.00 W 00:19:59.133 Non-Operational State: Operational 00:19:59.133 Entry Latency: Not Reported 00:19:59.133 Exit Latency: Not Reported 00:19:59.133 Relative Read Throughput: 0 00:19:59.133 Relative Read Latency: 0 00:19:59.133 Relative Write Throughput: 0 00:19:59.133 Relative Write Latency: 0 00:19:59.133 Idle Power: Not Reported 00:19:59.133 Active Power: Not Reported 00:19:59.133 Non-Operational Permissive Mode: Not Supported 00:19:59.133 00:19:59.133 Health Information 00:19:59.133 ================== 00:19:59.133 Critical Warnings: 00:19:59.133 Available Spare Space: OK 00:19:59.133 Temperature: OK 00:19:59.133 Device Reliability: OK 00:19:59.133 Read Only: No 00:19:59.133 Volatile Memory Backup: OK 00:19:59.133 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:59.133 Temperature Threshold: [2024-11-16 16:39:36.605080] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.605112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.605116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1060510) 00:19:59.133 [2024-11-16 16:39:36.605123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.133 [2024-11-16 16:39:36.605148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10ad240, cid 7, qid 0 00:19:59.133 [2024-11-16 16:39:36.605325] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.133 [2024-11-16 16:39:36.605333] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.133 [2024-11-16 16:39:36.605336] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.605340] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10ad240) on tqpair=0x1060510 00:19:59.133 [2024-11-16 16:39:36.605385] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:59.133 [2024-11-16 
16:39:36.605398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.133 [2024-11-16 16:39:36.605404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.133 [2024-11-16 16:39:36.605410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.133 [2024-11-16 16:39:36.605416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.133 [2024-11-16 16:39:36.605424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.605428] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.605431] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1060510) 00:19:59.133 [2024-11-16 16:39:36.605438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.133 [2024-11-16 16:39:36.605462] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10accc0, cid 3, qid 0 00:19:59.133 [2024-11-16 16:39:36.605874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.133 [2024-11-16 16:39:36.605888] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.133 [2024-11-16 16:39:36.605909] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.605913] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10accc0) on tqpair=0x1060510 00:19:59.133 [2024-11-16 16:39:36.605921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.605925] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.605928] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1060510) 00:19:59.133 [2024-11-16 16:39:36.605935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.133 [2024-11-16 16:39:36.605958] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10accc0, cid 3, qid 0 00:19:59.133 [2024-11-16 16:39:36.606045] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.133 [2024-11-16 16:39:36.606067] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.133 [2024-11-16 16:39:36.606070] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.606074] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10accc0) on tqpair=0x1060510 00:19:59.133 [2024-11-16 16:39:36.606079] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:59.133 [2024-11-16 16:39:36.606083] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:59.133 [2024-11-16 16:39:36.610135] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.610141] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.610145] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1060510) 
00:19:59.133 [2024-11-16 16:39:36.610152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.133 [2024-11-16 16:39:36.610177] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10accc0, cid 3, qid 0 00:19:59.133 [2024-11-16 16:39:36.610254] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:59.133 [2024-11-16 16:39:36.610261] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:59.133 [2024-11-16 16:39:36.610264] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:59.133 [2024-11-16 16:39:36.610268] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10accc0) on tqpair=0x1060510 00:19:59.133 [2024-11-16 16:39:36.610291] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:19:59.392 0 Kelvin (-273 Celsius) 00:19:59.392 Available Spare: 0% 00:19:59.392 Available Spare Threshold: 0% 00:19:59.392 Life Percentage Used: 0% 00:19:59.392 Data Units Read: 0 00:19:59.392 Data Units Written: 0 00:19:59.392 Host Read Commands: 0 00:19:59.392 Host Write Commands: 0 00:19:59.392 Controller Busy Time: 0 minutes 00:19:59.392 Power Cycles: 0 00:19:59.392 Power On Hours: 0 hours 00:19:59.392 Unsafe Shutdowns: 0 00:19:59.392 Unrecoverable Media Errors: 0 00:19:59.392 Lifetime Error Log Entries: 0 00:19:59.392 Warning Temperature Time: 0 minutes 00:19:59.392 Critical Temperature Time: 0 minutes 00:19:59.392 00:19:59.392 Number of Queues 00:19:59.392 ================ 00:19:59.392 Number of I/O Submission Queues: 127 00:19:59.392 Number of I/O Completion Queues: 127 00:19:59.392 00:19:59.392 Active Namespaces 00:19:59.392 ================= 00:19:59.392 Namespace ID:1 00:19:59.392 Error Recovery Timeout: Unlimited 00:19:59.392 Command Set Identifier: NVM (00h) 00:19:59.392 Deallocate: Supported 00:19:59.392 Deallocated/Unwritten Error: Not Supported 00:19:59.392 Deallocated Read Value: Unknown 00:19:59.392 Deallocate in Write Zeroes: Not Supported 00:19:59.392 Deallocated Guard Field: 0xFFFF 00:19:59.392 Flush: Supported 00:19:59.392 Reservation: Supported 00:19:59.392 Namespace Sharing Capabilities: Multiple Controllers 00:19:59.392 Size (in LBAs): 131072 (0GiB) 00:19:59.392 Capacity (in LBAs): 131072 (0GiB) 00:19:59.392 Utilization (in LBAs): 131072 (0GiB) 00:19:59.392 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:59.392 EUI64: ABCDEF0123456789 00:19:59.392 UUID: 7b0210c9-6c3c-48a8-b51a-fc5a0dc17a29 00:19:59.392 Thin Provisioning: Not Supported 00:19:59.392 Per-NS Atomic Units: Yes 00:19:59.392 Atomic Boundary Size (Normal): 0 00:19:59.392 Atomic Boundary Size (PFail): 0 00:19:59.392 Atomic Boundary Offset: 0 00:19:59.392 Maximum Single Source Range Length: 65535 00:19:59.392 Maximum Copy Length: 65535 00:19:59.392 Maximum Source Range Count: 1 00:19:59.392 NGUID/EUI64 Never Reused: No 00:19:59.392 Namespace Write Protected: No 00:19:59.392 Number of LBA Formats: 1 00:19:59.392 Current LBA Format: LBA Format #00 00:19:59.392 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:59.392 00:19:59.392 16:39:36 -- host/identify.sh@51 -- # sync 00:19:59.392 16:39:36 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.392 16:39:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.392 16:39:36 -- common/autotest_common.sh@10 -- # set +x 00:19:59.392 16:39:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
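The identify host test above connects to the TCP target, dumps the controller and namespace data shown in the log, and then deletes the subsystem via rpc.py. The same query can be reproduced by hand against a target configured like this one using SPDK's identify example; a minimal sketch, assuming the examples were built in this repo checkout (the binary path is an assumption — it is not shown in this log — while the transport ID values are copied from this run):

# Minimal sketch: re-run the identify query against a target like this one.
# Assumes SPDK examples are built; the binary path is an assumption.
IDENTIFY=/home/vagrant/spdk_repo/spdk/build/examples/identify
sudo $IDENTIFY \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Omitting the subnqn key makes the example query the discovery service first and then walk every advertised subsystem, which is closer to what identify.sh itself does.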
00:19:59.392 16:39:36 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:59.392 16:39:36 -- host/identify.sh@56 -- # nvmftestfini 00:19:59.392 16:39:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:59.392 16:39:36 -- nvmf/common.sh@116 -- # sync 00:19:59.392 16:39:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:59.392 16:39:36 -- nvmf/common.sh@119 -- # set +e 00:19:59.392 16:39:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:59.392 16:39:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:59.392 rmmod nvme_tcp 00:19:59.392 rmmod nvme_fabrics 00:19:59.392 rmmod nvme_keyring 00:19:59.392 16:39:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:59.392 16:39:36 -- nvmf/common.sh@123 -- # set -e 00:19:59.392 16:39:36 -- nvmf/common.sh@124 -- # return 0 00:19:59.392 16:39:36 -- nvmf/common.sh@477 -- # '[' -n 93676 ']' 00:19:59.392 16:39:36 -- nvmf/common.sh@478 -- # killprocess 93676 00:19:59.392 16:39:36 -- common/autotest_common.sh@936 -- # '[' -z 93676 ']' 00:19:59.392 16:39:36 -- common/autotest_common.sh@940 -- # kill -0 93676 00:19:59.392 16:39:36 -- common/autotest_common.sh@941 -- # uname 00:19:59.392 16:39:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.392 16:39:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93676 00:19:59.392 16:39:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:59.392 16:39:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:59.392 killing process with pid 93676 00:19:59.392 16:39:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93676' 00:19:59.392 16:39:36 -- common/autotest_common.sh@955 -- # kill 93676 00:19:59.392 [2024-11-16 16:39:36.811429] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:59.392 16:39:36 -- common/autotest_common.sh@960 -- # wait 93676 00:19:59.651 16:39:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:59.651 16:39:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:59.651 16:39:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:59.651 16:39:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.651 16:39:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:59.651 16:39:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.651 16:39:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.651 16:39:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.651 16:39:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:59.651 00:19:59.651 real 0m2.805s 00:19:59.651 user 0m7.904s 00:19:59.651 sys 0m0.743s 00:19:59.651 16:39:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:59.651 ************************************ 00:19:59.651 END TEST nvmf_identify 00:19:59.651 ************************************ 00:19:59.651 16:39:37 -- common/autotest_common.sh@10 -- # set +x 00:19:59.910 16:39:37 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:59.910 16:39:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:59.910 16:39:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.910 16:39:37 -- common/autotest_common.sh@10 -- # set +x 00:19:59.910 ************************************ 00:19:59.910 START TEST nvmf_perf 00:19:59.910 ************************************ 
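The nvmf_perf test that follows exercises spdk_nvme_perf against both the local PCIe controller and the NVMe-oF TCP target that perf.sh sets up below. Every run recorded in this log follows the same invocation pattern; a condensed sketch of that pattern, with the flag meanings spelled out (paths and values are taken from the runs recorded later in this test):

# spdk_nvme_perf invocation pattern used throughout this test.
# -q queue depth, -o IO size in bytes, -w workload type,
# -M read percentage for mixed workloads, -t run time in seconds,
# -r transport ID of the controller under test.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
# Local PCIe controller (BDF from this run):
sudo $PERF -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0'
# NVMe-oF TCP target created by perf.sh:
sudo $PERF -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'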
00:19:59.910 16:39:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:59.910 * Looking for test storage... 00:19:59.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:59.910 16:39:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:59.910 16:39:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:59.910 16:39:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:59.910 16:39:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:59.910 16:39:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:59.910 16:39:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:59.910 16:39:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:59.910 16:39:37 -- scripts/common.sh@335 -- # IFS=.-: 00:19:59.910 16:39:37 -- scripts/common.sh@335 -- # read -ra ver1 00:19:59.910 16:39:37 -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.910 16:39:37 -- scripts/common.sh@336 -- # read -ra ver2 00:19:59.910 16:39:37 -- scripts/common.sh@337 -- # local 'op=<' 00:19:59.910 16:39:37 -- scripts/common.sh@339 -- # ver1_l=2 00:19:59.910 16:39:37 -- scripts/common.sh@340 -- # ver2_l=1 00:19:59.910 16:39:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:59.910 16:39:37 -- scripts/common.sh@343 -- # case "$op" in 00:19:59.910 16:39:37 -- scripts/common.sh@344 -- # : 1 00:19:59.910 16:39:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:59.910 16:39:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:59.910 16:39:37 -- scripts/common.sh@364 -- # decimal 1 00:19:59.910 16:39:37 -- scripts/common.sh@352 -- # local d=1 00:19:59.910 16:39:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.910 16:39:37 -- scripts/common.sh@354 -- # echo 1 00:19:59.910 16:39:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:59.910 16:39:37 -- scripts/common.sh@365 -- # decimal 2 00:19:59.910 16:39:37 -- scripts/common.sh@352 -- # local d=2 00:19:59.910 16:39:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.910 16:39:37 -- scripts/common.sh@354 -- # echo 2 00:19:59.910 16:39:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:59.911 16:39:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:59.911 16:39:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:59.911 16:39:37 -- scripts/common.sh@367 -- # return 0 00:19:59.911 16:39:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.911 16:39:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:59.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.911 --rc genhtml_branch_coverage=1 00:19:59.911 --rc genhtml_function_coverage=1 00:19:59.911 --rc genhtml_legend=1 00:19:59.911 --rc geninfo_all_blocks=1 00:19:59.911 --rc geninfo_unexecuted_blocks=1 00:19:59.911 00:19:59.911 ' 00:19:59.911 16:39:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:59.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.911 --rc genhtml_branch_coverage=1 00:19:59.911 --rc genhtml_function_coverage=1 00:19:59.911 --rc genhtml_legend=1 00:19:59.911 --rc geninfo_all_blocks=1 00:19:59.911 --rc geninfo_unexecuted_blocks=1 00:19:59.911 00:19:59.911 ' 00:19:59.911 16:39:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:59.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.911 --rc genhtml_branch_coverage=1 00:19:59.911 --rc 
genhtml_function_coverage=1 00:19:59.911 --rc genhtml_legend=1 00:19:59.911 --rc geninfo_all_blocks=1 00:19:59.911 --rc geninfo_unexecuted_blocks=1 00:19:59.911 00:19:59.911 ' 00:19:59.911 16:39:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:59.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.911 --rc genhtml_branch_coverage=1 00:19:59.911 --rc genhtml_function_coverage=1 00:19:59.911 --rc genhtml_legend=1 00:19:59.911 --rc geninfo_all_blocks=1 00:19:59.911 --rc geninfo_unexecuted_blocks=1 00:19:59.911 00:19:59.911 ' 00:19:59.911 16:39:37 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.911 16:39:37 -- nvmf/common.sh@7 -- # uname -s 00:19:59.911 16:39:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.911 16:39:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.911 16:39:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.911 16:39:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.911 16:39:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.911 16:39:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.911 16:39:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.911 16:39:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.911 16:39:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.911 16:39:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.911 16:39:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:59.911 16:39:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:19:59.911 16:39:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.911 16:39:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.911 16:39:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.911 16:39:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.911 16:39:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.911 16:39:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.911 16:39:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.911 16:39:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.911 16:39:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.911 16:39:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.911 16:39:37 -- paths/export.sh@5 -- # export PATH 00:19:59.911 16:39:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.911 16:39:37 -- nvmf/common.sh@46 -- # : 0 00:19:59.911 16:39:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:59.911 16:39:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:59.911 16:39:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:59.911 16:39:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.911 16:39:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.911 16:39:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:59.911 16:39:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:59.911 16:39:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:59.911 16:39:37 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:59.911 16:39:37 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:59.911 16:39:37 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.911 16:39:37 -- host/perf.sh@17 -- # nvmftestinit 00:19:59.911 16:39:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:59.911 16:39:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.911 16:39:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:59.911 16:39:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:59.911 16:39:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:59.911 16:39:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.911 16:39:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.911 16:39:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.911 16:39:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:59.911 16:39:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:59.911 16:39:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:59.911 16:39:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:59.911 16:39:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:59.911 16:39:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:59.911 16:39:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.911 16:39:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.911 16:39:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.911 16:39:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:59.911 16:39:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.911 16:39:37 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.911 16:39:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.911 16:39:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.911 16:39:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.911 16:39:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.911 16:39:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.911 16:39:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.911 16:39:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:00.169 16:39:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:00.169 Cannot find device "nvmf_tgt_br" 00:20:00.169 16:39:37 -- nvmf/common.sh@154 -- # true 00:20:00.169 16:39:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.169 Cannot find device "nvmf_tgt_br2" 00:20:00.169 16:39:37 -- nvmf/common.sh@155 -- # true 00:20:00.169 16:39:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:00.169 16:39:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:00.169 Cannot find device "nvmf_tgt_br" 00:20:00.169 16:39:37 -- nvmf/common.sh@157 -- # true 00:20:00.169 16:39:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:00.169 Cannot find device "nvmf_tgt_br2" 00:20:00.169 16:39:37 -- nvmf/common.sh@158 -- # true 00:20:00.169 16:39:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:00.169 16:39:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:00.169 16:39:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.169 16:39:37 -- nvmf/common.sh@161 -- # true 00:20:00.169 16:39:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.169 16:39:37 -- nvmf/common.sh@162 -- # true 00:20:00.169 16:39:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.169 16:39:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.169 16:39:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.169 16:39:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.169 16:39:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.169 16:39:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.169 16:39:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.169 16:39:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.169 16:39:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.169 16:39:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:00.169 16:39:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:00.169 16:39:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:00.169 16:39:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:00.169 16:39:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.169 16:39:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
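nvmf_veth_init above rebuilds the test network from scratch: it deletes any stale interfaces, creates three veth pairs, moves the target-side ends into a fresh nvmf_tgt_ns_spdk namespace, and assigns the 10.0.0.0/24 addresses. A condensed sketch of the topology those commands produce (all names and addresses are taken verbatim from this run; the bridge enslaving, iptables rules, and ping checks follow below):

# Topology built by nvmf_veth_init, condensed from the commands above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair (host side)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listen address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second listen address

The *_br ends stay on the host and are enslaved to the nvmf_br bridge in the next commands, which is what lets the initiator at 10.0.0.1 reach the target's listeners inside the namespace.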
00:20:00.169 16:39:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.169 16:39:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:00.169 16:39:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:00.169 16:39:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.169 16:39:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.428 16:39:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.428 16:39:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.428 16:39:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.428 16:39:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:00.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:20:00.428 00:20:00.428 --- 10.0.0.2 ping statistics --- 00:20:00.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.428 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:00.428 16:39:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:00.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:20:00.428 00:20:00.428 --- 10.0.0.3 ping statistics --- 00:20:00.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.428 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:00.428 16:39:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:00.428 00:20:00.428 --- 10.0.0.1 ping statistics --- 00:20:00.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.428 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:00.428 16:39:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.428 16:39:37 -- nvmf/common.sh@421 -- # return 0 00:20:00.428 16:39:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:00.428 16:39:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.428 16:39:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:00.428 16:39:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:00.428 16:39:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.428 16:39:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:00.428 16:39:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:00.428 16:39:37 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:00.428 16:39:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:00.428 16:39:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.428 16:39:37 -- common/autotest_common.sh@10 -- # set +x 00:20:00.428 16:39:37 -- nvmf/common.sh@469 -- # nvmfpid=93907 00:20:00.428 16:39:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.428 16:39:37 -- nvmf/common.sh@470 -- # waitforlisten 93907 00:20:00.428 16:39:37 -- common/autotest_common.sh@829 -- # '[' -z 93907 ']' 00:20:00.428 16:39:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.428 16:39:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:00.428 16:39:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.428 16:39:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.428 16:39:37 -- common/autotest_common.sh@10 -- # set +x 00:20:00.428 [2024-11-16 16:39:37.784245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:00.428 [2024-11-16 16:39:37.784334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.687 [2024-11-16 16:39:37.920607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.687 [2024-11-16 16:39:37.996827] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:00.687 [2024-11-16 16:39:37.997546] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.687 [2024-11-16 16:39:37.997813] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.687 [2024-11-16 16:39:37.998065] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.687 [2024-11-16 16:39:37.998502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.687 [2024-11-16 16:39:37.998739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.687 [2024-11-16 16:39:37.998752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.687 [2024-11-16 16:39:37.998583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.622 16:39:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.622 16:39:38 -- common/autotest_common.sh@862 -- # return 0 00:20:01.622 16:39:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:01.622 16:39:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.622 16:39:38 -- common/autotest_common.sh@10 -- # set +x 00:20:01.622 16:39:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.622 16:39:38 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:01.622 16:39:38 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:01.881 16:39:39 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:01.881 16:39:39 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:02.139 16:39:39 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:02.139 16:39:39 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:02.397 16:39:39 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:02.398 16:39:39 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:02.398 16:39:39 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:02.398 16:39:39 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:02.398 16:39:39 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.656 [2024-11-16 16:39:40.009289] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.656 16:39:40 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.915 16:39:40 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:20:02.915 16:39:40 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.173 16:39:40 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:03.173 16:39:40 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:03.432 16:39:40 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.432 [2024-11-16 16:39:40.859860] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.432 16:39:40 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:03.690 16:39:41 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:03.690 16:39:41 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:03.690 16:39:41 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:03.690 16:39:41 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:05.066 Initializing NVMe Controllers 00:20:05.066 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:05.066 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:05.066 Initialization complete. Launching workers. 00:20:05.066 ======================================================== 00:20:05.066 Latency(us) 00:20:05.066 Device Information : IOPS MiB/s Average min max 00:20:05.066 PCIE (0000:00:06.0) NSID 1 from core 0: 23694.41 92.56 1350.97 393.74 7745.47 00:20:05.066 ======================================================== 00:20:05.066 Total : 23694.41 92.56 1350.97 393.74 7745.47 00:20:05.066 00:20:05.066 16:39:42 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.442 Initializing NVMe Controllers 00:20:06.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:06.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:06.442 Initialization complete. Launching workers. 
00:20:06.442 ======================================================== 00:20:06.442 Latency(us) 00:20:06.442 Device Information : IOPS MiB/s Average min max 00:20:06.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3481.94 13.60 286.95 105.21 7212.52 00:20:06.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.55 5001.01 12033.51 00:20:06.442 ======================================================== 00:20:06.442 Total : 3605.94 14.09 556.60 105.21 12033.51 00:20:06.442 00:20:06.442 16:39:43 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:07.378 [2024-11-16 16:39:44.840088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa90f0 is same with the state(5) to be set 00:20:07.378 [2024-11-16 16:39:44.844069] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa90f0 is same with the state(5) to be set 00:20:07.378 [2024-11-16 16:39:44.844170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa90f0 is same with the state(5) to be set 00:20:07.378 [2024-11-16 16:39:44.844238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa90f0 is same with the state(5) to be set 00:20:07.378 [2024-11-16 16:39:44.844297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa90f0 is same with the state(5) to be set 00:20:07.378 [2024-11-16 16:39:44.844361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa90f0 is same with the state(5) to be set 00:20:07.378 [2024-11-16 16:39:44.844424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa90f0 is same with the state(5) to be set 00:20:07.378 [2024-11-16 16:39:44.844470] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa90f0 is same with the state(5) to be set 00:20:07.378 [2024-11-16 16:39:44.844521] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa90f0 is same with the state(5) to be set 00:20:07.637 Initializing NVMe Controllers 00:20:07.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:07.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:07.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:07.637 Initialization complete. Launching workers. 
00:20:07.637 ======================================================== 00:20:07.637 Latency(us) 00:20:07.637 Device Information : IOPS MiB/s Average min max 00:20:07.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10162.21 39.70 3148.43 562.40 8316.72 00:20:07.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2652.27 10.36 12178.72 5590.30 20172.84 00:20:07.637 ======================================================== 00:20:07.637 Total : 12814.48 50.06 5017.47 562.40 20172.84 00:20:07.637 00:20:07.637 16:39:44 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:07.637 16:39:44 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.171 Initializing NVMe Controllers 00:20:10.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.171 Controller IO queue size 128, less than required. 00:20:10.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.171 Controller IO queue size 128, less than required. 00:20:10.171 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:10.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:10.171 Initialization complete. Launching workers. 00:20:10.171 ======================================================== 00:20:10.171 Latency(us) 00:20:10.171 Device Information : IOPS MiB/s Average min max 00:20:10.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1845.39 461.35 70288.05 46696.02 122000.74 00:20:10.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.47 146.12 226795.45 100868.20 378763.42 00:20:10.171 ======================================================== 00:20:10.171 Total : 2429.86 607.47 107933.55 46696.02 378763.42 00:20:10.171 00:20:10.171 16:39:47 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:10.429 No valid NVMe controllers or AIO or URING devices found 00:20:10.429 Initializing NVMe Controllers 00:20:10.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.429 Controller IO queue size 128, less than required. 00:20:10.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.429 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:10.429 Controller IO queue size 128, less than required. 00:20:10.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:10.429 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:10.429 WARNING: Some requested NVMe devices were skipped 00:20:10.429 16:39:47 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:12.966 Initializing NVMe Controllers 00:20:12.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.966 Controller IO queue size 128, less than required. 00:20:12.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.966 Controller IO queue size 128, less than required. 00:20:12.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:12.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:12.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:12.966 Initialization complete. Launching workers. 00:20:12.966 00:20:12.966 ==================== 00:20:12.966 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:12.966 TCP transport: 00:20:12.966 polls: 8242 00:20:12.966 idle_polls: 5785 00:20:12.966 sock_completions: 2457 00:20:12.966 nvme_completions: 4554 00:20:12.966 submitted_requests: 7028 00:20:12.966 queued_requests: 1 00:20:12.966 00:20:12.966 ==================== 00:20:12.966 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:12.966 TCP transport: 00:20:12.966 polls: 11399 00:20:12.966 idle_polls: 8856 00:20:12.966 sock_completions: 2543 00:20:12.966 nvme_completions: 4937 00:20:12.966 submitted_requests: 7481 00:20:12.966 queued_requests: 1 00:20:12.966 ======================================================== 00:20:12.966 Latency(us) 00:20:12.966 Device Information : IOPS MiB/s Average min max 00:20:12.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1201.95 300.49 108976.06 50694.58 193951.65 00:20:12.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1297.45 324.36 100597.26 51558.94 145506.58 00:20:12.966 ======================================================== 00:20:12.966 Total : 2499.40 624.85 104626.59 50694.58 193951.65 00:20:12.966 00:20:12.966 16:39:50 -- host/perf.sh@66 -- # sync 00:20:12.966 16:39:50 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:13.225 16:39:50 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:13.225 16:39:50 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:13.225 16:39:50 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:13.484 16:39:50 -- host/perf.sh@72 -- # ls_guid=5f773da6-e720-40bd-92f9-cd466fe15c1c 00:20:13.484 16:39:50 -- host/perf.sh@73 -- # get_lvs_free_mb 5f773da6-e720-40bd-92f9-cd466fe15c1c 00:20:13.484 16:39:50 -- common/autotest_common.sh@1353 -- # local lvs_uuid=5f773da6-e720-40bd-92f9-cd466fe15c1c 00:20:13.484 16:39:50 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:13.484 16:39:50 -- common/autotest_common.sh@1355 -- # local fc 00:20:13.484 16:39:50 -- common/autotest_common.sh@1356 -- # local cs 00:20:13.484 16:39:50 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:13.742 16:39:51 -- common/autotest_common.sh@1357 -- # lvs_info='[ 
00:20:13.742 { 00:20:13.742 "base_bdev": "Nvme0n1", 00:20:13.742 "block_size": 4096, 00:20:13.742 "cluster_size": 4194304, 00:20:13.742 "free_clusters": 1278, 00:20:13.743 "name": "lvs_0", 00:20:13.743 "total_data_clusters": 1278, 00:20:13.743 "uuid": "5f773da6-e720-40bd-92f9-cd466fe15c1c" 00:20:13.743 } 00:20:13.743 ]' 00:20:13.743 16:39:51 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="5f773da6-e720-40bd-92f9-cd466fe15c1c") .free_clusters' 00:20:14.002 16:39:51 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:14.002 16:39:51 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="5f773da6-e720-40bd-92f9-cd466fe15c1c") .cluster_size' 00:20:14.002 5112 00:20:14.002 16:39:51 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:14.002 16:39:51 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:14.002 16:39:51 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:14.002 16:39:51 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:14.002 16:39:51 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5f773da6-e720-40bd-92f9-cd466fe15c1c lbd_0 5112 00:20:14.260 16:39:51 -- host/perf.sh@80 -- # lb_guid=ce686929-b70e-4d57-8138-cfc4ad4eb3e8 00:20:14.260 16:39:51 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore ce686929-b70e-4d57-8138-cfc4ad4eb3e8 lvs_n_0 00:20:14.517 16:39:51 -- host/perf.sh@83 -- # ls_nested_guid=c36dac1a-adc0-4065-afee-addfa9d99133 00:20:14.517 16:39:51 -- host/perf.sh@84 -- # get_lvs_free_mb c36dac1a-adc0-4065-afee-addfa9d99133 00:20:14.517 16:39:51 -- common/autotest_common.sh@1353 -- # local lvs_uuid=c36dac1a-adc0-4065-afee-addfa9d99133 00:20:14.517 16:39:51 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:14.517 16:39:51 -- common/autotest_common.sh@1355 -- # local fc 00:20:14.517 16:39:51 -- common/autotest_common.sh@1356 -- # local cs 00:20:14.517 16:39:51 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:14.776 16:39:52 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:14.776 { 00:20:14.776 "base_bdev": "Nvme0n1", 00:20:14.776 "block_size": 4096, 00:20:14.776 "cluster_size": 4194304, 00:20:14.776 "free_clusters": 0, 00:20:14.776 "name": "lvs_0", 00:20:14.776 "total_data_clusters": 1278, 00:20:14.776 "uuid": "5f773da6-e720-40bd-92f9-cd466fe15c1c" 00:20:14.776 }, 00:20:14.776 { 00:20:14.776 "base_bdev": "ce686929-b70e-4d57-8138-cfc4ad4eb3e8", 00:20:14.776 "block_size": 4096, 00:20:14.776 "cluster_size": 4194304, 00:20:14.776 "free_clusters": 1276, 00:20:14.776 "name": "lvs_n_0", 00:20:14.776 "total_data_clusters": 1276, 00:20:14.776 "uuid": "c36dac1a-adc0-4065-afee-addfa9d99133" 00:20:14.776 } 00:20:14.776 ]' 00:20:14.776 16:39:52 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="c36dac1a-adc0-4065-afee-addfa9d99133") .free_clusters' 00:20:14.776 16:39:52 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:14.776 16:39:52 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="c36dac1a-adc0-4065-afee-addfa9d99133") .cluster_size' 00:20:14.776 5104 00:20:14.776 16:39:52 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:14.776 16:39:52 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:14.776 16:39:52 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:14.776 16:39:52 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:14.776 16:39:52 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
c36dac1a-adc0-4065-afee-addfa9d99133 lbd_nest_0 5104 00:20:15.034 16:39:52 -- host/perf.sh@88 -- # lb_nested_guid=08c2b020-a1eb-4aa0-9ae1-79b921f0e336 00:20:15.034 16:39:52 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.293 16:39:52 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:15.293 16:39:52 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 08c2b020-a1eb-4aa0-9ae1-79b921f0e336 00:20:15.551 16:39:52 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.810 16:39:53 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:15.810 16:39:53 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:15.810 16:39:53 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:15.810 16:39:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:15.810 16:39:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:16.069 No valid NVMe controllers or AIO or URING devices found 00:20:16.069 Initializing NVMe Controllers 00:20:16.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:16.069 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:16.069 WARNING: Some requested NVMe devices were skipped 00:20:16.069 16:39:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:16.069 16:39:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:28.276 Initializing NVMe Controllers 00:20:28.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:28.276 Initialization complete. Launching workers. 
00:20:28.276 ======================================================== 00:20:28.276 Latency(us) 00:20:28.276 Device Information : IOPS MiB/s Average min max 00:20:28.276 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 855.20 106.90 1168.44 386.04 7665.21 00:20:28.276 ======================================================== 00:20:28.276 Total : 855.20 106.90 1168.44 386.04 7665.21 00:20:28.276 00:20:28.276 16:40:03 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:28.276 16:40:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:28.276 16:40:03 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:28.276 No valid NVMe controllers or AIO or URING devices found 00:20:28.276 Initializing NVMe Controllers 00:20:28.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.276 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:28.276 WARNING: Some requested NVMe devices were skipped 00:20:28.276 16:40:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:28.276 16:40:03 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.254 Initializing NVMe Controllers 00:20:38.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:38.255 Initialization complete. Launching workers. 00:20:38.255 ======================================================== 00:20:38.255 Latency(us) 00:20:38.255 Device Information : IOPS MiB/s Average min max 00:20:38.255 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1075.56 134.44 29808.85 7921.86 257811.01 00:20:38.255 ======================================================== 00:20:38.255 Total : 1075.56 134.44 29808.85 7921.86 257811.01 00:20:38.255 00:20:38.255 16:40:14 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:38.255 16:40:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:38.255 16:40:14 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.255 No valid NVMe controllers or AIO or URING devices found 00:20:38.255 Initializing NVMe Controllers 00:20:38.255 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.255 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:38.255 WARNING: Some requested NVMe devices were skipped 00:20:38.255 16:40:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:38.255 16:40:14 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:48.238 Initializing NVMe Controllers 00:20:48.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.238 Controller IO queue size 128, less than required. 00:20:48.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:20:48.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:48.238 Initialization complete. Launching workers. 00:20:48.238 ======================================================== 00:20:48.238 Latency(us) 00:20:48.238 Device Information : IOPS MiB/s Average min max 00:20:48.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3689.15 461.14 34722.40 11817.71 79428.89 00:20:48.238 ======================================================== 00:20:48.238 Total : 3689.15 461.14 34722.40 11817.71 79428.89 00:20:48.238 00:20:48.238 16:40:24 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.238 16:40:25 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 08c2b020-a1eb-4aa0-9ae1-79b921f0e336 00:20:48.238 16:40:25 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:48.238 16:40:25 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ce686929-b70e-4d57-8138-cfc4ad4eb3e8 00:20:48.497 16:40:25 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:48.758 16:40:26 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:48.758 16:40:26 -- host/perf.sh@114 -- # nvmftestfini 00:20:48.758 16:40:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:48.758 16:40:26 -- nvmf/common.sh@116 -- # sync 00:20:48.758 16:40:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:48.758 16:40:26 -- nvmf/common.sh@119 -- # set +e 00:20:48.758 16:40:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:48.758 16:40:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:48.758 rmmod nvme_tcp 00:20:48.758 rmmod nvme_fabrics 00:20:48.758 rmmod nvme_keyring 00:20:48.758 16:40:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:48.758 16:40:26 -- nvmf/common.sh@123 -- # set -e 00:20:48.758 16:40:26 -- nvmf/common.sh@124 -- # return 0 00:20:48.758 16:40:26 -- nvmf/common.sh@477 -- # '[' -n 93907 ']' 00:20:48.758 16:40:26 -- nvmf/common.sh@478 -- # killprocess 93907 00:20:48.758 16:40:26 -- common/autotest_common.sh@936 -- # '[' -z 93907 ']' 00:20:48.758 16:40:26 -- common/autotest_common.sh@940 -- # kill -0 93907 00:20:48.758 16:40:26 -- common/autotest_common.sh@941 -- # uname 00:20:48.758 16:40:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.758 16:40:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93907 00:20:48.758 killing process with pid 93907 00:20:48.758 16:40:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:48.758 16:40:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:48.758 16:40:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93907' 00:20:48.758 16:40:26 -- common/autotest_common.sh@955 -- # kill 93907 00:20:48.758 16:40:26 -- common/autotest_common.sh@960 -- # wait 93907 00:20:49.358 16:40:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:49.358 16:40:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:49.358 16:40:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:49.358 16:40:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.358 16:40:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:49.358 16:40:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.358 16:40:26 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:49.358 16:40:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.358 16:40:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:49.358 00:20:49.358 real 0m49.477s 00:20:49.358 user 3m6.584s 00:20:49.358 sys 0m10.465s 00:20:49.358 16:40:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:49.358 16:40:26 -- common/autotest_common.sh@10 -- # set +x 00:20:49.358 ************************************ 00:20:49.358 END TEST nvmf_perf 00:20:49.358 ************************************ 00:20:49.358 16:40:26 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:49.358 16:40:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:49.358 16:40:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:49.358 16:40:26 -- common/autotest_common.sh@10 -- # set +x 00:20:49.358 ************************************ 00:20:49.358 START TEST nvmf_fio_host 00:20:49.358 ************************************ 00:20:49.358 16:40:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:49.358 * Looking for test storage... 00:20:49.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:49.358 16:40:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:49.358 16:40:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:49.358 16:40:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:49.358 16:40:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:49.358 16:40:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:49.358 16:40:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:49.358 16:40:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:49.358 16:40:26 -- scripts/common.sh@335 -- # IFS=.-: 00:20:49.358 16:40:26 -- scripts/common.sh@335 -- # read -ra ver1 00:20:49.358 16:40:26 -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.358 16:40:26 -- scripts/common.sh@336 -- # read -ra ver2 00:20:49.633 16:40:26 -- scripts/common.sh@337 -- # local 'op=<' 00:20:49.633 16:40:26 -- scripts/common.sh@339 -- # ver1_l=2 00:20:49.633 16:40:26 -- scripts/common.sh@340 -- # ver2_l=1 00:20:49.633 16:40:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:49.633 16:40:26 -- scripts/common.sh@343 -- # case "$op" in 00:20:49.633 16:40:26 -- scripts/common.sh@344 -- # : 1 00:20:49.633 16:40:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:49.633 16:40:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:49.633 16:40:26 -- scripts/common.sh@364 -- # decimal 1 00:20:49.633 16:40:26 -- scripts/common.sh@352 -- # local d=1 00:20:49.633 16:40:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.633 16:40:26 -- scripts/common.sh@354 -- # echo 1 00:20:49.633 16:40:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:49.633 16:40:26 -- scripts/common.sh@365 -- # decimal 2 00:20:49.633 16:40:26 -- scripts/common.sh@352 -- # local d=2 00:20:49.633 16:40:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.633 16:40:26 -- scripts/common.sh@354 -- # echo 2 00:20:49.633 16:40:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:49.633 16:40:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:49.633 16:40:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:49.633 16:40:26 -- scripts/common.sh@367 -- # return 0 00:20:49.633 16:40:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.633 16:40:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:49.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.633 --rc genhtml_branch_coverage=1 00:20:49.633 --rc genhtml_function_coverage=1 00:20:49.633 --rc genhtml_legend=1 00:20:49.633 --rc geninfo_all_blocks=1 00:20:49.633 --rc geninfo_unexecuted_blocks=1 00:20:49.633 00:20:49.633 ' 00:20:49.633 16:40:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:49.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.633 --rc genhtml_branch_coverage=1 00:20:49.633 --rc genhtml_function_coverage=1 00:20:49.633 --rc genhtml_legend=1 00:20:49.633 --rc geninfo_all_blocks=1 00:20:49.633 --rc geninfo_unexecuted_blocks=1 00:20:49.633 00:20:49.633 ' 00:20:49.634 16:40:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:49.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.634 --rc genhtml_branch_coverage=1 00:20:49.634 --rc genhtml_function_coverage=1 00:20:49.634 --rc genhtml_legend=1 00:20:49.634 --rc geninfo_all_blocks=1 00:20:49.634 --rc geninfo_unexecuted_blocks=1 00:20:49.634 00:20:49.634 ' 00:20:49.634 16:40:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:49.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.634 --rc genhtml_branch_coverage=1 00:20:49.634 --rc genhtml_function_coverage=1 00:20:49.634 --rc genhtml_legend=1 00:20:49.634 --rc geninfo_all_blocks=1 00:20:49.634 --rc geninfo_unexecuted_blocks=1 00:20:49.634 00:20:49.634 ' 00:20:49.634 16:40:26 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:49.634 16:40:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.634 16:40:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.634 16:40:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.634 16:40:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.634 16:40:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.634 16:40:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.634 16:40:26 -- paths/export.sh@5 -- # export PATH 00:20:49.634 16:40:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.634 16:40:26 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:49.634 16:40:26 -- nvmf/common.sh@7 -- # uname -s 00:20:49.634 16:40:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.634 16:40:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.634 16:40:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.634 16:40:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.634 16:40:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.634 16:40:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.634 16:40:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.634 16:40:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.634 16:40:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.634 16:40:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.634 16:40:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:20:49.634 16:40:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:20:49.634 16:40:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.634 16:40:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.634 16:40:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:49.634 16:40:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:49.634 16:40:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.634 16:40:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.634 16:40:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.634 16:40:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.634 16:40:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.634 16:40:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.634 16:40:26 -- paths/export.sh@5 -- # export PATH 00:20:49.634 16:40:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.634 16:40:26 -- nvmf/common.sh@46 -- # : 0 00:20:49.634 16:40:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:49.634 16:40:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:49.634 16:40:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:49.634 16:40:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.634 16:40:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.634 16:40:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:49.634 16:40:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:49.634 16:40:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:49.634 16:40:26 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:49.634 16:40:26 -- host/fio.sh@14 -- # nvmftestinit 00:20:49.634 16:40:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:49.634 16:40:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.634 16:40:26 -- nvmf/common.sh@436 -- # prepare_net_devs 
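The nvmf_veth_init trace that follows builds the test network by hand. As a minimal standalone sketch of the same topology — namespace name, interface names, and 10.0.0.x addresses taken directly from the trace; run as root — it boils down to the commands below (the trace also adds a second target interface, nvmf_tgt_if2 at 10.0.0.3, the same way):

  ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge                               # bridge joins the two host-side ends
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator can now reach the target netns

The "Cannot find device" and "Cannot open network namespace" messages in the trace are the expected teardown of leftovers from a previous run before this setup is recreated.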
00:20:49.634 16:40:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:49.634 16:40:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:49.634 16:40:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.634 16:40:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.634 16:40:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.634 16:40:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:49.634 16:40:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:49.634 16:40:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:49.634 16:40:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:49.634 16:40:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:49.634 16:40:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:49.634 16:40:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.634 16:40:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.634 16:40:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:49.634 16:40:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:49.634 16:40:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:49.634 16:40:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:49.634 16:40:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:49.634 16:40:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.634 16:40:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:49.634 16:40:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:49.634 16:40:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:49.634 16:40:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:49.634 16:40:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:49.634 16:40:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:49.634 Cannot find device "nvmf_tgt_br" 00:20:49.634 16:40:26 -- nvmf/common.sh@154 -- # true 00:20:49.634 16:40:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:49.634 Cannot find device "nvmf_tgt_br2" 00:20:49.634 16:40:26 -- nvmf/common.sh@155 -- # true 00:20:49.634 16:40:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:49.634 16:40:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:49.634 Cannot find device "nvmf_tgt_br" 00:20:49.634 16:40:26 -- nvmf/common.sh@157 -- # true 00:20:49.634 16:40:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:49.634 Cannot find device "nvmf_tgt_br2" 00:20:49.634 16:40:26 -- nvmf/common.sh@158 -- # true 00:20:49.634 16:40:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:49.634 16:40:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:49.634 16:40:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:49.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:49.634 16:40:27 -- nvmf/common.sh@161 -- # true 00:20:49.634 16:40:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:49.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:49.634 16:40:27 -- nvmf/common.sh@162 -- # true 00:20:49.634 16:40:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:49.634 16:40:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:49.634 16:40:27 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:49.634 16:40:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:49.634 16:40:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:49.635 16:40:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:49.635 16:40:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:49.635 16:40:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:49.635 16:40:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:49.635 16:40:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:49.635 16:40:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:49.635 16:40:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:49.903 16:40:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:49.903 16:40:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:49.903 16:40:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:49.903 16:40:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:49.903 16:40:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:49.903 16:40:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:49.903 16:40:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:49.903 16:40:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:49.903 16:40:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:49.903 16:40:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:49.903 16:40:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:49.903 16:40:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:49.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:49.903 00:20:49.903 --- 10.0.0.2 ping statistics --- 00:20:49.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.903 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:49.903 16:40:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:49.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:49.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:49.903 00:20:49.903 --- 10.0.0.3 ping statistics --- 00:20:49.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.903 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:49.903 16:40:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:49.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:49.903 00:20:49.903 --- 10.0.0.1 ping statistics --- 00:20:49.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.903 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:49.903 16:40:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.903 16:40:27 -- nvmf/common.sh@421 -- # return 0 00:20:49.903 16:40:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:49.903 16:40:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.903 16:40:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:49.903 16:40:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:49.903 16:40:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.903 16:40:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:49.903 16:40:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:49.903 16:40:27 -- host/fio.sh@16 -- # [[ y != y ]] 00:20:49.903 16:40:27 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:49.903 16:40:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:49.903 16:40:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.903 16:40:27 -- host/fio.sh@24 -- # nvmfpid=94878 00:20:49.903 16:40:27 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:49.903 16:40:27 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.903 16:40:27 -- host/fio.sh@28 -- # waitforlisten 94878 00:20:49.903 16:40:27 -- common/autotest_common.sh@829 -- # '[' -z 94878 ']' 00:20:49.903 16:40:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.903 16:40:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.903 16:40:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.903 16:40:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.903 16:40:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.903 [2024-11-16 16:40:27.306809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:49.904 [2024-11-16 16:40:27.307076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.162 [2024-11-16 16:40:27.449238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.162 [2024-11-16 16:40:27.527348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:50.162 [2024-11-16 16:40:27.527769] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.162 [2024-11-16 16:40:27.527846] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.162 [2024-11-16 16:40:27.528081] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
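With the target process up (nvmf_tgt -i 0 -e 0xFFFF -m 0xF inside the namespace), the fio host test configures it over RPC. The sequence echoed a few records below reduces to this sketch — paths, NQNs, and arguments exactly as in the run; only the $rpc shorthand is mine; the malloc bdev arguments are 64 MiB total size and a 512-byte block size:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, options as used by the harness
  $rpc bdev_malloc_create 64 512 -b Malloc1                      # RAM-backed bdev to export as a namespace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

fio then attaches through the SPDK plugin rather than the kernel initiator: the harness sets LD_PRELOAD to build/fio/spdk_nvme and passes the connection string as the job filename ('trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'), which is what produces the test: job groups in the fio output below.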
00:20:50.162 [2024-11-16 16:40:27.528314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.162 [2024-11-16 16:40:27.528420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.162 [2024-11-16 16:40:27.528550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.162 [2024-11-16 16:40:27.528560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.099 16:40:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.099 16:40:28 -- common/autotest_common.sh@862 -- # return 0 00:20:51.099 16:40:28 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:51.099 [2024-11-16 16:40:28.470990] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.099 16:40:28 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:51.099 16:40:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.099 16:40:28 -- common/autotest_common.sh@10 -- # set +x 00:20:51.099 16:40:28 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:51.358 Malloc1 00:20:51.617 16:40:28 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.617 16:40:29 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:51.878 16:40:29 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:52.138 [2024-11-16 16:40:29.447818] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.139 16:40:29 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:52.397 16:40:29 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:52.397 16:40:29 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.397 16:40:29 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.397 16:40:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:52.397 16:40:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.397 16:40:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:52.397 16:40:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.397 16:40:29 -- common/autotest_common.sh@1330 -- # shift 00:20:52.397 16:40:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:52.397 16:40:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.397 16:40:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.397 16:40:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:52.398 16:40:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:52.398 16:40:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:52.398 16:40:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:52.398 16:40:29 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.398 16:40:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:52.398 16:40:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:52.398 16:40:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:52.398 16:40:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:52.398 16:40:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:52.398 16:40:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:52.398 16:40:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:52.398 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:52.398 fio-3.35 00:20:52.398 Starting 1 thread 00:20:54.933 00:20:54.933 test: (groupid=0, jobs=1): err= 0: pid=94999: Sat Nov 16 16:40:32 2024 00:20:54.933 read: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(85.0MiB/2006msec) 00:20:54.933 slat (nsec): min=1588, max=354490, avg=1960.12, stdev=2967.76 00:20:54.933 clat (usec): min=2429, max=11052, avg=6249.51, stdev=544.83 00:20:54.933 lat (usec): min=2440, max=11054, avg=6251.47, stdev=544.65 00:20:54.933 clat percentiles (usec): 00:20:54.933 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5866], 00:20:54.933 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6325], 00:20:54.933 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7111], 00:20:54.933 | 99.00th=[ 7570], 99.50th=[ 7963], 99.90th=[10290], 99.95th=[10421], 00:20:54.933 | 99.99th=[10945] 00:20:54.933 bw ( KiB/s): min=41832, max=44064, per=100.00%, avg=43416.00, stdev=1059.65, samples=4 00:20:54.933 iops : min=10458, max=11016, avg=10854.00, stdev=264.91, samples=4 00:20:54.933 write: IOPS=10.8k, BW=42.3MiB/s (44.4MB/s)(84.9MiB/2006msec); 0 zone resets 00:20:54.933 slat (nsec): min=1682, max=137574, avg=2038.90, stdev=1284.73 00:20:54.933 clat (usec): min=1935, max=10741, avg=5471.75, stdev=466.12 00:20:54.933 lat (usec): min=1949, max=10744, avg=5473.79, stdev=465.99 00:20:54.933 clat percentiles (usec): 00:20:54.933 | 1.00th=[ 4490], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5145], 00:20:54.933 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:20:54.933 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6063], 00:20:54.933 | 99.00th=[ 6456], 99.50th=[ 7177], 99.90th=[ 9372], 99.95th=[10028], 00:20:54.933 | 99.99th=[10290] 00:20:54.933 bw ( KiB/s): min=42264, max=44032, per=99.98%, avg=43318.00, stdev=759.32, samples=4 00:20:54.933 iops : min=10566, max=11008, avg=10829.50, stdev=189.83, samples=4 00:20:54.933 lat (msec) : 2=0.02%, 4=0.17%, 10=99.72%, 20=0.09% 00:20:54.933 cpu : usr=67.68%, sys=23.14%, ctx=47, majf=0, minf=5 00:20:54.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:54.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.933 issued rwts: total=21766,21728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.933 00:20:54.933 Run status group 0 (all jobs): 00:20:54.933 READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=85.0MiB 
(89.2MB), run=2006-2006msec 00:20:54.933 WRITE: bw=42.3MiB/s (44.4MB/s), 42.3MiB/s-42.3MiB/s (44.4MB/s-44.4MB/s), io=84.9MiB (89.0MB), run=2006-2006msec 00:20:54.933 16:40:32 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.933 16:40:32 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.933 16:40:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:54.933 16:40:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.933 16:40:32 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:54.933 16:40:32 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.933 16:40:32 -- common/autotest_common.sh@1330 -- # shift 00:20:54.933 16:40:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:54.933 16:40:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.933 16:40:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.933 16:40:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:54.933 16:40:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:54.933 16:40:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:54.933 16:40:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:54.933 16:40:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.933 16:40:32 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:54.933 16:40:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.933 16:40:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:54.933 16:40:32 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:54.933 16:40:32 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:54.933 16:40:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:54.933 16:40:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:54.933 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:54.933 fio-3.35 00:20:54.933 Starting 1 thread 00:20:57.469 00:20:57.469 test: (groupid=0, jobs=1): err= 0: pid=95048: Sat Nov 16 16:40:34 2024 00:20:57.469 read: IOPS=8979, BW=140MiB/s (147MB/s)(281MiB/2006msec) 00:20:57.469 slat (usec): min=2, max=124, avg= 3.41, stdev= 2.47 00:20:57.469 clat (usec): min=1879, max=16061, avg=8524.12, stdev=2190.41 00:20:57.469 lat (usec): min=1882, max=16064, avg=8527.53, stdev=2190.64 00:20:57.469 clat percentiles (usec): 00:20:57.469 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6521], 00:20:57.469 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9110], 00:20:57.469 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11338], 95.00th=[12387], 00:20:57.469 | 99.00th=[13960], 99.50th=[14484], 99.90th=[15533], 99.95th=[15664], 00:20:57.469 | 99.99th=[16057] 00:20:57.469 bw ( KiB/s): min=65600, max=74112, per=49.81%, avg=71560.00, stdev=4013.87, samples=4 
00:20:57.469 iops : min= 4100, max= 4632, avg=4472.50, stdev=250.87, samples=4 00:20:57.469 write: IOPS=5280, BW=82.5MiB/s (86.5MB/s)(146MiB/1764msec); 0 zone resets 00:20:57.469 slat (usec): min=29, max=350, avg=34.47, stdev= 9.62 00:20:57.469 clat (usec): min=1976, max=18612, avg=10200.81, stdev=1855.90 00:20:57.469 lat (usec): min=2006, max=18642, avg=10235.28, stdev=1857.86 00:20:57.469 clat percentiles (usec): 00:20:57.469 | 1.00th=[ 6915], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8586], 00:20:57.469 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10421], 00:20:57.470 | 70.00th=[10814], 80.00th=[11600], 90.00th=[12911], 95.00th=[13829], 00:20:57.470 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16450], 99.95th=[16712], 00:20:57.470 | 99.99th=[18744] 00:20:57.470 bw ( KiB/s): min=68224, max=77536, per=88.20%, avg=74512.00, stdev=4273.53, samples=4 00:20:57.470 iops : min= 4264, max= 4846, avg=4657.00, stdev=267.10, samples=4 00:20:57.470 lat (msec) : 2=0.01%, 4=0.35%, 10=65.73%, 20=33.91% 00:20:57.470 cpu : usr=71.37%, sys=18.15%, ctx=3, majf=0, minf=1 00:20:57.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:20:57.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.470 issued rwts: total=18012,9314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.470 00:20:57.470 Run status group 0 (all jobs): 00:20:57.470 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=281MiB (295MB), run=2006-2006msec 00:20:57.470 WRITE: bw=82.5MiB/s (86.5MB/s), 82.5MiB/s-82.5MiB/s (86.5MB/s-86.5MB/s), io=146MiB (153MB), run=1764-1764msec 00:20:57.470 16:40:34 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.470 16:40:34 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:20:57.470 16:40:34 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:20:57.470 16:40:34 -- host/fio.sh@51 -- # get_nvme_bdfs 00:20:57.470 16:40:34 -- common/autotest_common.sh@1508 -- # bdfs=() 00:20:57.470 16:40:34 -- common/autotest_common.sh@1508 -- # local bdfs 00:20:57.470 16:40:34 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:57.470 16:40:34 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:57.470 16:40:34 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:20:57.727 16:40:34 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:20:57.727 16:40:34 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:57.727 16:40:34 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:20:57.985 Nvme0n1 00:20:57.985 16:40:35 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:58.244 16:40:35 -- host/fio.sh@53 -- # ls_guid=0abda1c4-1c75-43c2-8567-b3af12bc5515 00:20:58.245 16:40:35 -- host/fio.sh@54 -- # get_lvs_free_mb 0abda1c4-1c75-43c2-8567-b3af12bc5515 00:20:58.245 16:40:35 -- common/autotest_common.sh@1353 -- # local lvs_uuid=0abda1c4-1c75-43c2-8567-b3af12bc5515 00:20:58.245 16:40:35 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:58.245 16:40:35 -- common/autotest_common.sh@1355 -- # local fc 
00:20:58.245 16:40:35 -- common/autotest_common.sh@1356 -- # local cs 00:20:58.245 16:40:35 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:58.504 16:40:35 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:58.504 { 00:20:58.504 "base_bdev": "Nvme0n1", 00:20:58.504 "block_size": 4096, 00:20:58.504 "cluster_size": 1073741824, 00:20:58.504 "free_clusters": 4, 00:20:58.504 "name": "lvs_0", 00:20:58.504 "total_data_clusters": 4, 00:20:58.504 "uuid": "0abda1c4-1c75-43c2-8567-b3af12bc5515" 00:20:58.504 } 00:20:58.504 ]' 00:20:58.504 16:40:35 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="0abda1c4-1c75-43c2-8567-b3af12bc5515") .free_clusters' 00:20:58.504 16:40:35 -- common/autotest_common.sh@1358 -- # fc=4 00:20:58.504 16:40:35 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="0abda1c4-1c75-43c2-8567-b3af12bc5515") .cluster_size' 00:20:58.504 16:40:35 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:20:58.504 16:40:35 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:20:58.504 4096 00:20:58.504 16:40:35 -- common/autotest_common.sh@1363 -- # echo 4096 00:20:58.504 16:40:35 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:58.762 26e2265a-8957-42b6-9269-7e6f14b32c8b 00:20:58.762 16:40:36 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:59.021 16:40:36 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:59.281 16:40:36 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:59.540 16:40:36 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.540 16:40:36 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.540 16:40:36 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:59.540 16:40:36 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:59.540 16:40:36 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:59.540 16:40:36 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.540 16:40:36 -- common/autotest_common.sh@1330 -- # shift 00:20:59.540 16:40:36 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:59.540 16:40:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.540 16:40:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.540 16:40:36 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:59.540 16:40:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:59.540 16:40:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:59.540 16:40:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:59.540 16:40:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.540 16:40:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
00:20:59.540 16:40:36 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:59.540 16:40:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:59.540 16:40:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:59.540 16:40:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:59.540 16:40:36 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:59.540 16:40:36 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.540 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:59.540 fio-3.35 00:20:59.540 Starting 1 thread 00:21:02.074 00:21:02.074 test: (groupid=0, jobs=1): err= 0: pid=95199: Sat Nov 16 16:40:39 2024 00:21:02.074 read: IOPS=6239, BW=24.4MiB/s (25.6MB/s)(48.9MiB/2008msec) 00:21:02.074 slat (nsec): min=1721, max=417830, avg=2434.67, stdev=5134.52 00:21:02.074 clat (usec): min=4421, max=19274, avg=10893.84, stdev=1043.93 00:21:02.074 lat (usec): min=4431, max=19276, avg=10896.28, stdev=1043.68 00:21:02.074 clat percentiles (usec): 00:21:02.074 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:21:02.074 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:21:02.074 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:21:02.074 | 99.00th=[13566], 99.50th=[14353], 99.90th=[17957], 99.95th=[18220], 00:21:02.074 | 99.99th=[19268] 00:21:02.074 bw ( KiB/s): min=24072, max=25712, per=99.82%, avg=24912.00, stdev=690.32, samples=4 00:21:02.074 iops : min= 6018, max= 6428, avg=6228.00, stdev=172.58, samples=4 00:21:02.074 write: IOPS=6231, BW=24.3MiB/s (25.5MB/s)(48.9MiB/2008msec); 0 zone resets 00:21:02.074 slat (nsec): min=1788, max=250405, avg=2550.98, stdev=3224.49 00:21:02.074 clat (usec): min=2773, max=18146, avg=9562.53, stdev=924.56 00:21:02.074 lat (usec): min=2787, max=18148, avg=9565.08, stdev=924.43 00:21:02.074 clat percentiles (usec): 00:21:02.074 | 1.00th=[ 7570], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:21:02.074 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:21:02.074 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10945], 00:21:02.074 | 99.00th=[11863], 99.50th=[12649], 99.90th=[16909], 99.95th=[17171], 00:21:02.074 | 99.99th=[18220] 00:21:02.074 bw ( KiB/s): min=24688, max=25096, per=99.97%, avg=24916.00, stdev=193.60, samples=4 00:21:02.074 iops : min= 6172, max= 6274, avg=6229.00, stdev=48.40, samples=4 00:21:02.074 lat (msec) : 4=0.04%, 10=44.14%, 20=55.82% 00:21:02.074 cpu : usr=70.50%, sys=22.47%, ctx=37, majf=0, minf=5 00:21:02.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:02.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:02.074 issued rwts: total=12529,12512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:02.074 00:21:02.074 Run status group 0 (all jobs): 00:21:02.074 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=48.9MiB (51.3MB), run=2008-2008msec 00:21:02.074 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB (51.2MB), run=2008-2008msec 00:21:02.074 16:40:39 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:02.074 16:40:39 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:02.333 16:40:39 -- host/fio.sh@64 -- # ls_nested_guid=e6668ae8-42b7-43fc-9d84-87a49d93dc82 00:21:02.333 16:40:39 -- host/fio.sh@65 -- # get_lvs_free_mb e6668ae8-42b7-43fc-9d84-87a49d93dc82 00:21:02.333 16:40:39 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e6668ae8-42b7-43fc-9d84-87a49d93dc82 00:21:02.333 16:40:39 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:02.333 16:40:39 -- common/autotest_common.sh@1355 -- # local fc 00:21:02.333 16:40:39 -- common/autotest_common.sh@1356 -- # local cs 00:21:02.333 16:40:39 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:02.592 16:40:40 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:02.592 { 00:21:02.592 "base_bdev": "Nvme0n1", 00:21:02.592 "block_size": 4096, 00:21:02.592 "cluster_size": 1073741824, 00:21:02.592 "free_clusters": 0, 00:21:02.592 "name": "lvs_0", 00:21:02.592 "total_data_clusters": 4, 00:21:02.592 "uuid": "0abda1c4-1c75-43c2-8567-b3af12bc5515" 00:21:02.592 }, 00:21:02.592 { 00:21:02.592 "base_bdev": "26e2265a-8957-42b6-9269-7e6f14b32c8b", 00:21:02.592 "block_size": 4096, 00:21:02.592 "cluster_size": 4194304, 00:21:02.592 "free_clusters": 1022, 00:21:02.592 "name": "lvs_n_0", 00:21:02.592 "total_data_clusters": 1022, 00:21:02.592 "uuid": "e6668ae8-42b7-43fc-9d84-87a49d93dc82" 00:21:02.592 } 00:21:02.592 ]' 00:21:02.852 16:40:40 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e6668ae8-42b7-43fc-9d84-87a49d93dc82") .free_clusters' 00:21:02.852 16:40:40 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:02.852 16:40:40 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e6668ae8-42b7-43fc-9d84-87a49d93dc82") .cluster_size' 00:21:02.852 16:40:40 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:02.852 16:40:40 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:02.852 4088 00:21:02.852 16:40:40 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:02.852 16:40:40 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:03.110 85e5430e-25da-42c1-905e-cafd34f27918 00:21:03.110 16:40:40 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:03.369 16:40:40 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:03.628 16:40:40 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:03.887 16:40:41 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:03.887 16:40:41 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:03.887 16:40:41 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:03.887 16:40:41 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:03.887 
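The get_lvs_free_mb helper traced above reduces to: free MiB = free_clusters x cluster_size / 1 MiB. A condensed, illustrative sketch of that calculation (here `rpc` stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path and lvs_uuid for the store's UUID; these short names are for readability only, not taken from the script):

    lvs_info=$(rpc bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<< "$lvs_info")   # 1022 in the trace above
    cs=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size" <<< "$lvs_info")    # 4194304 in the trace above
    echo $(( fc * cs / 1024 / 1024 ))   # 1022 * 4194304 / 2^20 = 4088 MiB

The resulting 4088 is exactly the size host/fio.sh@66 passes to bdev_lvol_create for lbd_nest_0 above.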
16:40:41 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:03.887 16:40:41 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:03.887 16:40:41 -- common/autotest_common.sh@1330 -- # shift 00:21:03.887 16:40:41 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:03.887 16:40:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:03.887 16:40:41 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:03.887 16:40:41 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:03.887 16:40:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:03.887 16:40:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:03.887 16:40:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:03.887 16:40:41 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:03.887 16:40:41 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:03.887 16:40:41 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:03.887 16:40:41 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:03.887 16:40:41 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:03.887 16:40:41 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:03.887 16:40:41 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:03.887 16:40:41 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:03.887 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:03.887 fio-3.35 00:21:03.887 Starting 1 thread 00:21:06.419 00:21:06.419 test: (groupid=0, jobs=1): err= 0: pid=95325: Sat Nov 16 16:40:43 2024 00:21:06.419 read: IOPS=5755, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec) 00:21:06.419 slat (nsec): min=1754, max=321874, avg=2799.61, stdev=4451.92 00:21:06.419 clat (usec): min=3410, max=21435, avg=11825.53, stdev=1116.55 00:21:06.419 lat (usec): min=3419, max=21438, avg=11828.33, stdev=1116.34 00:21:06.419 clat percentiles (usec): 00:21:06.419 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:21:06.419 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[11994], 00:21:06.419 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13173], 95.00th=[13566], 00:21:06.419 | 99.00th=[14484], 99.50th=[14746], 99.90th=[18220], 99.95th=[18744], 00:21:06.419 | 99.99th=[21365] 00:21:06.419 bw ( KiB/s): min=22056, max=23480, per=99.82%, avg=22980.00, stdev=636.47, samples=4 00:21:06.419 iops : min= 5514, max= 5870, avg=5745.00, stdev=159.12, samples=4 00:21:06.419 write: IOPS=5742, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2009msec); 0 zone resets 00:21:06.419 slat (nsec): min=1808, max=188738, avg=2910.41, stdev=3393.90 00:21:06.419 clat (usec): min=2254, max=18708, avg=10340.15, stdev=960.83 00:21:06.419 lat (usec): min=2267, max=18710, avg=10343.06, stdev=960.69 00:21:06.419 clat percentiles (usec): 00:21:06.419 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:21:06.419 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:21:06.419 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:21:06.419 | 99.00th=[12387], 99.50th=[12780], 99.90th=[17433], 99.95th=[18482], 00:21:06.419 | 99.99th=[18744] 
00:21:06.419 bw ( KiB/s): min=22848, max=23040, per=99.97%, avg=22962.00, stdev=81.16, samples=4 00:21:06.419 iops : min= 5712, max= 5760, avg=5740.50, stdev=20.29, samples=4 00:21:06.419 lat (msec) : 4=0.05%, 10=19.26%, 20=80.67%, 50=0.02% 00:21:06.419 cpu : usr=73.06%, sys=20.57%, ctx=4, majf=0, minf=5 00:21:06.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.419 issued rwts: total=11562,11536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.419 00:21:06.419 Run status group 0 (all jobs): 00:21:06.419 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:21:06.419 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2009-2009msec 00:21:06.419 16:40:43 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:06.419 16:40:43 -- host/fio.sh@74 -- # sync 00:21:06.678 16:40:43 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:06.678 16:40:44 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:06.936 16:40:44 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:07.194 16:40:44 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:07.453 16:40:44 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:08.388 16:40:45 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:08.388 16:40:45 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:08.388 16:40:45 -- host/fio.sh@86 -- # nvmftestfini 00:21:08.388 16:40:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:08.388 16:40:45 -- nvmf/common.sh@116 -- # sync 00:21:08.388 16:40:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:08.388 16:40:45 -- nvmf/common.sh@119 -- # set +e 00:21:08.388 16:40:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:08.388 16:40:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:08.388 rmmod nvme_tcp 00:21:08.388 rmmod nvme_fabrics 00:21:08.647 rmmod nvme_keyring 00:21:08.647 16:40:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:08.647 16:40:45 -- nvmf/common.sh@123 -- # set -e 00:21:08.647 16:40:45 -- nvmf/common.sh@124 -- # return 0 00:21:08.647 16:40:45 -- nvmf/common.sh@477 -- # '[' -n 94878 ']' 00:21:08.647 16:40:45 -- nvmf/common.sh@478 -- # killprocess 94878 00:21:08.647 16:40:45 -- common/autotest_common.sh@936 -- # '[' -z 94878 ']' 00:21:08.647 16:40:45 -- common/autotest_common.sh@940 -- # kill -0 94878 00:21:08.647 16:40:45 -- common/autotest_common.sh@941 -- # uname 00:21:08.647 16:40:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:08.647 16:40:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94878 00:21:08.647 16:40:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:08.647 killing process with pid 94878 00:21:08.647 16:40:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:08.647 16:40:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94878' 00:21:08.647 16:40:45 
-- common/autotest_common.sh@955 -- # kill 94878 00:21:08.647 16:40:45 -- common/autotest_common.sh@960 -- # wait 94878 00:21:08.906 16:40:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:08.906 16:40:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:08.906 16:40:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:08.906 16:40:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.906 16:40:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:08.906 16:40:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.906 16:40:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.906 16:40:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.906 16:40:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:08.906 00:21:08.906 real 0m19.547s 00:21:08.906 user 1m25.517s 00:21:08.906 sys 0m4.311s 00:21:08.906 16:40:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:08.906 16:40:46 -- common/autotest_common.sh@10 -- # set +x 00:21:08.906 ************************************ 00:21:08.906 END TEST nvmf_fio_host 00:21:08.906 ************************************ 00:21:08.906 16:40:46 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:08.906 16:40:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:08.906 16:40:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:08.906 16:40:46 -- common/autotest_common.sh@10 -- # set +x 00:21:08.906 ************************************ 00:21:08.906 START TEST nvmf_failover 00:21:08.906 ************************************ 00:21:08.906 16:40:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:08.906 * Looking for test storage... 00:21:08.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:08.906 16:40:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:08.906 16:40:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:08.906 16:40:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:09.165 16:40:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:09.165 16:40:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:09.165 16:40:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:09.165 16:40:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:09.165 16:40:46 -- scripts/common.sh@335 -- # IFS=.-: 00:21:09.165 16:40:46 -- scripts/common.sh@335 -- # read -ra ver1 00:21:09.165 16:40:46 -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.165 16:40:46 -- scripts/common.sh@336 -- # read -ra ver2 00:21:09.165 16:40:46 -- scripts/common.sh@337 -- # local 'op=<' 00:21:09.165 16:40:46 -- scripts/common.sh@339 -- # ver1_l=2 00:21:09.165 16:40:46 -- scripts/common.sh@340 -- # ver2_l=1 00:21:09.165 16:40:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:09.165 16:40:46 -- scripts/common.sh@343 -- # case "$op" in 00:21:09.165 16:40:46 -- scripts/common.sh@344 -- # : 1 00:21:09.165 16:40:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:09.165 16:40:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:09.165 16:40:46 -- scripts/common.sh@364 -- # decimal 1 00:21:09.165 16:40:46 -- scripts/common.sh@352 -- # local d=1 00:21:09.165 16:40:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.165 16:40:46 -- scripts/common.sh@354 -- # echo 1 00:21:09.165 16:40:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:09.165 16:40:46 -- scripts/common.sh@365 -- # decimal 2 00:21:09.165 16:40:46 -- scripts/common.sh@352 -- # local d=2 00:21:09.165 16:40:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.165 16:40:46 -- scripts/common.sh@354 -- # echo 2 00:21:09.165 16:40:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:09.165 16:40:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:09.165 16:40:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:09.165 16:40:46 -- scripts/common.sh@367 -- # return 0 00:21:09.165 16:40:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.165 16:40:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.165 --rc genhtml_branch_coverage=1 00:21:09.165 --rc genhtml_function_coverage=1 00:21:09.165 --rc genhtml_legend=1 00:21:09.165 --rc geninfo_all_blocks=1 00:21:09.165 --rc geninfo_unexecuted_blocks=1 00:21:09.165 00:21:09.165 ' 00:21:09.165 16:40:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.165 --rc genhtml_branch_coverage=1 00:21:09.165 --rc genhtml_function_coverage=1 00:21:09.165 --rc genhtml_legend=1 00:21:09.165 --rc geninfo_all_blocks=1 00:21:09.165 --rc geninfo_unexecuted_blocks=1 00:21:09.165 00:21:09.165 ' 00:21:09.165 16:40:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.165 --rc genhtml_branch_coverage=1 00:21:09.165 --rc genhtml_function_coverage=1 00:21:09.165 --rc genhtml_legend=1 00:21:09.165 --rc geninfo_all_blocks=1 00:21:09.165 --rc geninfo_unexecuted_blocks=1 00:21:09.165 00:21:09.165 ' 00:21:09.165 16:40:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:09.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.165 --rc genhtml_branch_coverage=1 00:21:09.165 --rc genhtml_function_coverage=1 00:21:09.165 --rc genhtml_legend=1 00:21:09.165 --rc geninfo_all_blocks=1 00:21:09.165 --rc geninfo_unexecuted_blocks=1 00:21:09.165 00:21:09.165 ' 00:21:09.165 16:40:46 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:09.165 16:40:46 -- nvmf/common.sh@7 -- # uname -s 00:21:09.165 16:40:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.165 16:40:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.165 16:40:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.165 16:40:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.165 16:40:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.165 16:40:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.165 16:40:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.165 16:40:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.165 16:40:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.166 16:40:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.166 16:40:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:21:09.166 
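The lt/cmp_versions gate traced earlier in this block (lt 1.15 2 against the installed lcov) compares dotted version fields left to right until one differs. A self-contained, illustrative bash equivalent (version_lt is a name invented here; the autotest scripts implement this as cmp_versions in scripts/common.sh):

    # Returns 0 (true) when version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-               # split fields on '.' and '-'
        local -a ver1 ver2
        local v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                   # equal versions are not "less than"
    }

    version_lt 1.15 2   # succeeds here, which is why the legacy --rc lcov_*_coverage options above are exported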
16:40:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:21:09.166 16:40:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.166 16:40:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.166 16:40:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:09.166 16:40:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.166 16:40:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.166 16:40:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.166 16:40:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.166 16:40:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.166 16:40:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.166 16:40:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.166 16:40:46 -- paths/export.sh@5 -- # export PATH 00:21:09.166 16:40:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.166 16:40:46 -- nvmf/common.sh@46 -- # : 0 00:21:09.166 16:40:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:09.166 16:40:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:09.166 16:40:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:09.166 16:40:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.166 16:40:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.166 16:40:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
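nvmf/common.sh has now generated a host NQN/ID pair and packaged it as the NVME_HOST array for initiator-side commands. A minimal, illustrative example of how nvme-cli consumes that identity when connecting to such a target by hand (this test instead drives I/O through fio and bdevperf, so no nvme connect appears in this trace):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # i.e. "${NVME_HOST[@]}"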
00:21:09.166 16:40:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:09.166 16:40:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:09.166 16:40:46 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:09.166 16:40:46 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:09.166 16:40:46 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:09.166 16:40:46 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:09.166 16:40:46 -- host/failover.sh@18 -- # nvmftestinit 00:21:09.166 16:40:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:09.166 16:40:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.166 16:40:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:09.166 16:40:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:09.166 16:40:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:09.166 16:40:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.166 16:40:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.166 16:40:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.166 16:40:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:09.166 16:40:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:09.166 16:40:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:09.166 16:40:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:09.166 16:40:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:09.166 16:40:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:09.166 16:40:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.166 16:40:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.166 16:40:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:09.166 16:40:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:09.166 16:40:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:09.166 16:40:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:09.166 16:40:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:09.166 16:40:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.166 16:40:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:09.166 16:40:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:09.166 16:40:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:09.166 16:40:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:09.166 16:40:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:09.166 16:40:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:09.166 Cannot find device "nvmf_tgt_br" 00:21:09.166 16:40:46 -- nvmf/common.sh@154 -- # true 00:21:09.166 16:40:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:09.166 Cannot find device "nvmf_tgt_br2" 00:21:09.166 16:40:46 -- nvmf/common.sh@155 -- # true 00:21:09.166 16:40:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:09.166 16:40:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:09.166 Cannot find device "nvmf_tgt_br" 00:21:09.166 16:40:46 -- nvmf/common.sh@157 -- # true 00:21:09.166 16:40:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:09.166 Cannot find device "nvmf_tgt_br2" 00:21:09.166 16:40:46 -- nvmf/common.sh@158 -- # true 00:21:09.166 16:40:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:09.166 16:40:46 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:09.166 16:40:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:09.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.166 16:40:46 -- nvmf/common.sh@161 -- # true 00:21:09.166 16:40:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:09.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:09.166 16:40:46 -- nvmf/common.sh@162 -- # true 00:21:09.166 16:40:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:09.166 16:40:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:09.166 16:40:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:09.425 16:40:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:09.425 16:40:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:09.425 16:40:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:09.425 16:40:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:09.425 16:40:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:09.425 16:40:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:09.425 16:40:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:09.425 16:40:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:09.425 16:40:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:09.425 16:40:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:09.425 16:40:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:09.425 16:40:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:09.425 16:40:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:09.425 16:40:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:09.425 16:40:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:09.425 16:40:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:09.425 16:40:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:09.425 16:40:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:09.426 16:40:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:09.426 16:40:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:09.426 16:40:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:09.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:21:09.426 00:21:09.426 --- 10.0.0.2 ping statistics --- 00:21:09.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.426 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:09.426 16:40:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:09.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:09.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:21:09.426 00:21:09.426 --- 10.0.0.3 ping statistics --- 00:21:09.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.426 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:09.426 16:40:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:09.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:09.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:09.426 00:21:09.426 --- 10.0.0.1 ping statistics --- 00:21:09.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.426 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:09.426 16:40:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.426 16:40:46 -- nvmf/common.sh@421 -- # return 0 00:21:09.426 16:40:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:09.426 16:40:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.426 16:40:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:09.426 16:40:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:09.426 16:40:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.426 16:40:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:09.426 16:40:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:09.426 16:40:46 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:09.426 16:40:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:09.426 16:40:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:09.426 16:40:46 -- common/autotest_common.sh@10 -- # set +x 00:21:09.426 16:40:46 -- nvmf/common.sh@469 -- # nvmfpid=95605 00:21:09.426 16:40:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:09.426 16:40:46 -- nvmf/common.sh@470 -- # waitforlisten 95605 00:21:09.426 16:40:46 -- common/autotest_common.sh@829 -- # '[' -z 95605 ']' 00:21:09.426 16:40:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.426 16:40:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.426 16:40:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.426 16:40:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.426 16:40:46 -- common/autotest_common.sh@10 -- # set +x 00:21:09.426 [2024-11-16 16:40:46.886411] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:09.426 [2024-11-16 16:40:46.886488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.685 [2024-11-16 16:40:47.030234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:09.685 [2024-11-16 16:40:47.103718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:09.685 [2024-11-16 16:40:47.103891] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.685 [2024-11-16 16:40:47.103909] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:09.685 [2024-11-16 16:40:47.103920] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.685 [2024-11-16 16:40:47.104095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.685 [2024-11-16 16:40:47.104911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.685 [2024-11-16 16:40:47.104966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.620 16:40:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.620 16:40:47 -- common/autotest_common.sh@862 -- # return 0 00:21:10.620 16:40:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:10.620 16:40:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:10.620 16:40:47 -- common/autotest_common.sh@10 -- # set +x 00:21:10.620 16:40:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.620 16:40:47 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:10.879 [2024-11-16 16:40:48.136150] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.879 16:40:48 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:10.879 Malloc0 00:21:11.138 16:40:48 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.138 16:40:48 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.397 16:40:48 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.656 [2024-11-16 16:40:49.062678] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.656 16:40:49 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:11.915 [2024-11-16 16:40:49.350936] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:11.915 16:40:49 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:12.177 [2024-11-16 16:40:49.631284] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:12.177 16:40:49 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:12.177 16:40:49 -- host/failover.sh@31 -- # bdevperf_pid=95711 00:21:12.177 16:40:49 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.177 16:40:49 -- host/failover.sh@34 -- # waitforlisten 95711 /var/tmp/bdevperf.sock 00:21:12.177 16:40:49 -- common/autotest_common.sh@829 -- # '[' -z 95711 ']' 00:21:12.177 16:40:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.177 16:40:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.177 16:40:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:12.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.177 16:40:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.177 16:40:49 -- common/autotest_common.sh@10 -- # set +x 00:21:13.114 16:40:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.114 16:40:50 -- common/autotest_common.sh@862 -- # return 0 00:21:13.114 16:40:50 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.683 NVMe0n1 00:21:13.683 16:40:50 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.941 00:21:13.941 16:40:51 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.941 16:40:51 -- host/failover.sh@39 -- # run_test_pid=95764 00:21:13.941 16:40:51 -- host/failover.sh@41 -- # sleep 1 00:21:14.894 16:40:52 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:15.159 [2024-11-16 16:40:52.424409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424584] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424608] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424639] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.159 [2024-11-16 16:40:52.424647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1f21c90 is same with the state(5) to be set
[the identical tcp.c:1576 nvmf_tcp_qpair_set_recv_state message for tqpair=0x1f21c90 repeats many times here and is elided]
00:21:15.160 [2024-11-16 16:40:52.426300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the
state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426311] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426374] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 [2024-11-16 16:40:52.426678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21c90 is same with the state(5) to be set 00:21:15.160 16:40:52 -- host/failover.sh@45 -- # sleep 3 00:21:18.448 16:40:55 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.448 00:21:18.448 16:40:55 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:18.707 [2024-11-16 16:40:56.001170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23380 is same with the state(5) to be set 00:21:18.707 [2024-11-16 16:40:56.001242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23380 is same with the state(5) to be set 00:21:18.707 [2024-11-16 16:40:56.001255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23380 is same with the state(5) to be set 00:21:18.707 [2024-11-16 16:40:56.001264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23380 is same with the state(5) to be set 00:21:18.707 [2024-11-16 16:40:56.001272] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f23380 is same with the state(5) to be set
[the identical tcp.c:1576 nvmf_tcp_qpair_set_recv_state message for tqpair=0x1f23380 repeats many times here and is elided]
00:21:18.707 [2024-11-16 16:40:56.001815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state:
00:21:18.707 16:40:56 -- host/failover.sh@50 -- # sleep 3
00:21:21.992 16:40:59 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:21.992 [2024-11-16 16:40:59.265426] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:21.992 16:40:59 -- host/failover.sh@55 -- # sleep 1
00:21:22.928 16:41:00 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:23.188 [2024-11-16 16:41:00.550181] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23a60 is same with the state(5) to be set
00:21:23.188 [... the identical *ERROR* line repeats verbatim through 2024-11-16 16:41:00.550969; only the microsecond timestamp changes ...]
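[editor's note: the failover toggle exercised above (host/failover.sh@50-@57) reduces to the short sequence sketched below. This is a minimal reconstruction from the xtrace in this log, not the in-tree script; the rpc.py path, NQN, address, and ports are copied verbatim from the commands above, while the comments are interpretation. The repeated tcp.c:1576 *ERROR* lines record only what their text says: the target qpair's recv state was set to the value it already held while the connection was being torn down.]

    #!/usr/bin/env bash
    # Sketch of the listener toggle from host/failover.sh@50-@57,
    # reconstructed from the xtrace above; the in-tree script may differ.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    sleep 3                                                            # @50: let bdevperf run I/O on the original path
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # @53: bring up the alternate listener
    sleep 1                                                            # @55
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422  # @57: drop the original listener so the host fails over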
00:21:23.189 16:41:00 -- host/failover.sh@59 -- # wait 95764
00:21:29.763 0
00:21:29.763 16:41:06 -- host/failover.sh@61 -- # killprocess 95711
00:21:29.763 16:41:06 -- common/autotest_common.sh@936 -- # '[' -z 95711 ']'
00:21:29.763 16:41:06 -- common/autotest_common.sh@940 -- # kill -0 95711
00:21:29.763 16:41:06 -- common/autotest_common.sh@941 -- # uname
00:21:29.763 16:41:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:29.763 16:41:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95711
00:21:29.764 killing process with pid 95711
00:21:29.764 16:41:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:29.764 16:41:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:29.764 16:41:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95711'
00:21:29.764 16:41:06 -- common/autotest_common.sh@955 -- # kill 95711
00:21:29.764 16:41:06 -- common/autotest_common.sh@960 -- # wait 95711
[see the killprocess sketch at the end of this section]
00:21:29.764 16:41:06 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:29.764 [2024-11-16 16:40:49.687862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:29.764 [2024-11-16 16:40:49.687955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95711 ]
00:21:29.764 [2024-11-16 16:40:49.819843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:29.764 [2024-11-16 16:40:49.902975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:29.764 Running I/O for 15 seconds...
00:21:29.764 [2024-11-16 16:40:52.427325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427958] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.427984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.427996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.428022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.428055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.428112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.428143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.428169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.428196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.428222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.428248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5824 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.764 [2024-11-16 16:40:52.428274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.764 [2024-11-16 16:40:52.428288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 
[2024-11-16 16:40:52.428568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.428970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.428984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-11-16 16:40:52.428996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-11-16 16:40:52.429336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-11-16 16:40:52.429364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.765 [2024-11-16 16:40:52.429391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.765 [2024-11-16 16:40:52.429448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.765 [2024-11-16 16:40:52.429461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:29.766 [2024-11-16 16:40:52.429476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-11-16 16:40:52.429489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-11-16 16:40:52.429530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.429557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.429604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.429630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.429656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.429681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.429707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-11-16 16:40:52.429732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-11-16 16:40:52.429783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429798] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.429810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.429836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-11-16 16:40:52.429863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-11-16 16:40:52.429907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.766 [2024-11-16 16:40:52.429934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.429977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.429998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.430011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.430026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.430040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.430054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.430068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.430082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.766 [2024-11-16 16:40:52.430095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.766 [2024-11-16 16:40:52.430110] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:29.766 [2024-11-16 16:40:52.430123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 35 further in-flight commands on sqid:1 (24 READ / 11 WRITE, len:8, lba 6240-6928) each completed with the same ABORTED - SQ DELETION (00/08) status between 16:40:52.430138 and 16:40:52.431151; per-command NOTICE lines condensed ...]
00:21:29.767 [2024-11-16 16:40:52.431165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba6130 is same with the state(5) to be set
00:21:29.767 [2024-11-16 16:40:52.431181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:29.767 [2024-11-16 16:40:52.431196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:29.767 [2024-11-16 16:40:52.431207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6608 len:8 PRP1 0x0 PRP2 0x0
00:21:29.767 [2024-11-16 16:40:52.431220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.767 [2024-11-16 16:40:52.431276] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xba6130 was disconnected and freed. reset controller.
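The "(00/08)" pair printed with every abort above is the NVMe status code type and status code: sct 0x00 (generic command status) and sc 0x08 (command aborted due to SQ deletion), with dnr:0 marking the commands as retryable. A minimal sketch of decoding that status in an SPDK I/O completion callback, using only public spdk/nvme.h types (the function name io_complete is illustrative, not part of this test):

/* Sketch: decode the "(00/08)" ABORTED - SQ DELETION status seen above. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* command succeeded */
	}
	/* The log's "(00/08)" is status code type / status code. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Aborted only because the submission queue was deleted
		 * during disconnect; dnr:0 (do-not-retry clear) means the
		 * command may be reissued after the controller reset. */
		printf("retryable abort: %s\n",
		       spdk_nvme_cpl_get_status_string(&cpl->status));
		return;
	}
	printf("I/O error: %s\n",
	       spdk_nvme_cpl_get_status_string(&cpl->status));
}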
00:21:29.767 [2024-11-16 16:40:52.431293] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:29.767 [2024-11-16 16:40:52.431347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.767 [2024-11-16 16:40:52.431368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.767 [2024-11-16 16:40:52.431383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.767 [2024-11-16 16:40:52.431396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.767 [2024-11-16 16:40:52.431409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.767 [2024-11-16 16:40:52.431422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.767 [2024-11-16 16:40:52.431436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.767 [2024-11-16 16:40:52.431448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.767 [2024-11-16 16:40:52.431461] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:29.767 [2024-11-16 16:40:52.431535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb21cb0 (9): Bad file descriptor
00:21:29.767 [2024-11-16 16:40:52.433810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:29.767 [2024-11-16 16:40:52.462273] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
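The reset path above (failover trid 4420 to 4421, controller marked failed, disconnect, successful reset) is what makes the dnr:0 aborts recoverable: the aborted I/O is requeued and replayed once a new qpair exists on the new path. A hedged sketch of that initiator-side retry pattern follows; it is not the actual bdev_nvme implementation, retry_ctx and resubmit are illustrative names, and only the spdk_nvme_* calls are real API:

/* Sketch: requeue-and-reissue for retryable SQ-deletion aborts. */
#include "spdk/nvme.h"

struct retry_ctx {
	struct spdk_nvme_ns	*ns;        /* nsid:1 in the log */
	struct spdk_nvme_qpair	*qpair;     /* qpair re-created after reset */
	void			*buf;
	uint64_t		 lba;       /* e.g. 6232 from the first abort */
	uint32_t		 lba_count; /* len:8 in the log */
};

static void read_done(void *cb_arg, const struct spdk_nvme_cpl *cpl);

static int
resubmit(struct retry_ctx *ctx)
{
	/* Reissue the same READ on the post-reset qpair. */
	return spdk_nvme_ns_cmd_read(ctx->ns, ctx->qpair, ctx->buf,
				     ctx->lba, ctx->lba_count,
				     read_done, ctx, 0);
}

static void
read_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct retry_ctx *ctx = cb_arg;

	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
	    !cpl->status.dnr) {
		/* Aborted by qpair teardown, not by the media: safe to
		 * reissue once the reset/failover above has completed. */
		resubmit(ctx);
	}
}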
00:21:29.767 [2024-11-16 16:40:56.002087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:29.767 [2024-11-16 16:40:56.002146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining in-flight commands on sqid:1 (READ/WRITE, len:8, lba 39232-40528) each completed with the same ABORTED - SQ DELETION (00/08) status between 16:40:56.002146 and 16:40:56.006522; per-command NOTICE lines condensed ...]
00:21:29.771 [2024-11-16 16:40:56.006536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb80b10 is same with the state(5) to be set
00:21:29.771 [2024-11-16 16:40:56.006556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:29.771 [2024-11-16 16:40:56.006573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:29.771 [2024-11-16 16:40:56.006584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39936 len:8 PRP1 0x0 PRP2 0x0
00:21:29.771 [2024-11-16 16:40:56.006596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.771 [2024-11-16 16:40:56.006652] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb80b10 was disconnected and freed. reset controller.
00:21:29.771 [2024-11-16 16:40:56.006669] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:29.771 [2024-11-16 16:40:56.006720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.771 [2024-11-16 16:40:56.006741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.771 [2024-11-16 16:40:56.006755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.771 [2024-11-16 16:40:56.006768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.771 [2024-11-16 16:40:56.006796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.771 [2024-11-16 16:40:56.006807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.771 [2024-11-16 16:40:56.006819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:29.771 [2024-11-16 16:40:56.006831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.771 [2024-11-16 16:40:56.006843] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:29.771 [2024-11-16 16:40:56.006885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb21cb0 (9): Bad file descriptor
00:21:29.771 [2024-11-16 16:40:56.009323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:29.771 [2024-11-16 16:40:56.038163] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
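The four ASYNC EVENT REQUEST aborts (qid:0, cid 0-3) in each failover block above are the driver's outstanding asynchronous event requests on the admin queue being cancelled along with the admin SQ. A minimal sketch, assuming the public SPDK API, of how an application observes those events; aer_cb and setup_aer are illustrative names:

/* Sketch: hook admin-queue async events via the public API. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *aer_cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)aer_cb_arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* e.g. ABORTED - SQ DELETION while the controller resets,
		 * as in the log above; the driver rearms AERs afterwards. */
		return;
	}
	/* On success, cdw0 carries the async event type and info. */
	printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

/* Called once after the controller is attached. */
static void
setup_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}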
00:21:29.771 [2024-11-16 16:41:00.551059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:29.771 [2024-11-16 16:41:00.551132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.772 [2024-11-16 16:41:00.553158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:29.772 [2024-11-16 16:41:00.553172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / ABORTED - SQ DELETION completion pair repeats for every remaining queued READ/WRITE on qid:1 (cids 0-126, lbas 55824-57168); the entries differ only in cid, lba and timestamp ...]
00:21:29.774 [2024-11-16 16:41:00.555160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba8210 is same with the state(5) to be set
00:21:29.774 [2024-11-16 16:41:00.555176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:29.774 [2024-11-16 16:41:00.555186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:29.774 [2024-11-16 16:41:00.555197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56824 len:8 PRP1 0x0 PRP2 0x0
00:21:29.774 [2024-11-16 16:41:00.555209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:29.774 [2024-11-16 16:41:00.555269] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xba8210 was disconnected and freed. reset controller.
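Every entry in the abort flood above is the same two-step pattern: nvme_io_qpair_print_command dumps a command still queued on qid:1, and spdk_nvme_print_completion then reports it as ABORTED - SQ DELETION because the submission queue was torn down mid-failover. When triaging a log like this, one illustrative way to summarize the flood is a short awk pass (a sketch only; "build.log" is a placeholder for wherever this console output was captured):

    awk '/nvme_io_qpair_print_command/ {
            # count aborted commands per opcode (READ/WRITE appear as bare tokens)
            for (i = 1; i <= NF; i++) if ($i == "READ" || $i == "WRITE") n[$i]++
         }
         END { for (op in n) print op, n[op] }' build.log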
00:21:29.774 [2024-11-16 16:41:00.555286] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:29.774 [2024-11-16 16:41:00.555339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.774 [2024-11-16 16:41:00.555360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-11-16 16:41:00.555384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.774 [2024-11-16 16:41:00.555413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-11-16 16:41:00.555426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.774 [2024-11-16 16:41:00.555438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-11-16 16:41:00.555451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:29.774 [2024-11-16 16:41:00.555463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:29.774 [2024-11-16 16:41:00.555476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.774 [2024-11-16 16:41:00.557703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.774 [2024-11-16 16:41:00.557740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb21cb0 (9): Bad file descriptor 00:21:29.774 [2024-11-16 16:41:00.592204] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:29.774 00:21:29.774 Latency(us) 00:21:29.774 [2024-11-16T16:41:07.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.774 [2024-11-16T16:41:07.265Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:29.774 Verification LBA range: start 0x0 length 0x4000 00:21:29.774 NVMe0n1 : 15.00 15029.79 58.71 331.43 0.00 8317.68 525.03 15192.44 00:21:29.774 [2024-11-16T16:41:07.265Z] =================================================================================================================== 00:21:29.774 [2024-11-16T16:41:07.265Z] Total : 15029.79 58.71 331.43 0.00 8317.68 525.03 15192.44 00:21:29.774 Received shutdown signal, test time was about 15.000000 seconds 00:21:29.774 00:21:29.774 Latency(us) 00:21:29.774 [2024-11-16T16:41:07.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.774 [2024-11-16T16:41:07.265Z] =================================================================================================================== 00:21:29.774 [2024-11-16T16:41:07.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.774 16:41:06 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:29.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
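In the 15-second verify table above, the 331.43 Fail/s figure is consistent with the ABORTED - SQ DELETION completions logged during the forced failovers, while TO/s stays at zero; Average/min/max are latencies in microseconds. The MiB/s column is simply IOPS times the 4096-byte I/O size, which can be checked quickly (illustrative; bc assumed available):

    $ echo 'scale=2; 15029.79 * 4096 / 1048576' | bc
    58.71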
00:21:29.774 16:41:06 -- host/failover.sh@65 -- # count=3 00:21:29.774 16:41:06 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:29.774 16:41:06 -- host/failover.sh@73 -- # bdevperf_pid=95967 00:21:29.774 16:41:06 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:29.774 16:41:06 -- host/failover.sh@75 -- # waitforlisten 95967 /var/tmp/bdevperf.sock 00:21:29.774 16:41:06 -- common/autotest_common.sh@829 -- # '[' -z 95967 ']' 00:21:29.774 16:41:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.774 16:41:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.774 16:41:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.774 16:41:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.774 16:41:06 -- common/autotest_common.sh@10 -- # set +x 00:21:30.342 16:41:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.342 16:41:07 -- common/autotest_common.sh@862 -- # return 0 00:21:30.342 16:41:07 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:30.342 [2024-11-16 16:41:07.754933] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:30.342 16:41:07 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:30.601 [2024-11-16 16:41:07.963241] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:30.601 16:41:07 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.860 NVMe0n1 00:21:30.860 16:41:08 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.427 00:21:31.427 16:41:08 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.685 00:21:31.685 16:41:08 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.685 16:41:08 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:31.685 16:41:09 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.944 16:41:09 -- host/failover.sh@87 -- # sleep 3 00:21:35.232 16:41:12 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:35.232 16:41:12 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:35.232 16:41:12 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.232 16:41:12 -- host/failover.sh@90 -- # run_test_pid=96107 00:21:35.232 16:41:12 -- host/failover.sh@92 -- # wait 96107 00:21:36.610 0 00:21:36.610 16:41:13 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:36.610 [2024-11-16 16:41:06.618248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:36.610 [2024-11-16 16:41:06.618351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95967 ] 00:21:36.610 [2024-11-16 16:41:06.755334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.610 [2024-11-16 16:41:06.822557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.610 [2024-11-16 16:41:09.361512] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:36.610 [2024-11-16 16:41:09.361640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.610 [2024-11-16 16:41:09.361677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.610 [2024-11-16 16:41:09.361703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.610 [2024-11-16 16:41:09.361716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.610 [2024-11-16 16:41:09.361728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.610 [2024-11-16 16:41:09.361740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.610 [2024-11-16 16:41:09.361752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.610 [2024-11-16 16:41:09.361774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.610 [2024-11-16 16:41:09.361786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.610 [2024-11-16 16:41:09.361831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.610 [2024-11-16 16:41:09.361859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5dcb0 (9): Bad file descriptor 00:21:36.610 [2024-11-16 16:41:09.372930] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:36.610 Running I/O for 1 seconds... 
00:21:36.610 00:21:36.610 Latency(us) 00:21:36.610 [2024-11-16T16:41:14.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.610 [2024-11-16T16:41:14.101Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:36.610 Verification LBA range: start 0x0 length 0x4000 00:21:36.610 NVMe0n1 : 1.00 15772.36 61.61 0.00 0.00 8082.90 1124.54 9472.93 00:21:36.610 [2024-11-16T16:41:14.101Z] =================================================================================================================== 00:21:36.610 [2024-11-16T16:41:14.101Z] Total : 15772.36 61.61 0.00 0.00 8082.90 1124.54 9472.93 00:21:36.610 16:41:13 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.610 16:41:13 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:36.610 16:41:14 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:36.869 16:41:14 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:36.869 16:41:14 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.128 16:41:14 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.386 16:41:14 -- host/failover.sh@101 -- # sleep 3 00:21:40.673 16:41:17 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.673 16:41:17 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:40.673 16:41:17 -- host/failover.sh@108 -- # killprocess 95967 00:21:40.673 16:41:17 -- common/autotest_common.sh@936 -- # '[' -z 95967 ']' 00:21:40.673 16:41:17 -- common/autotest_common.sh@940 -- # kill -0 95967 00:21:40.673 16:41:17 -- common/autotest_common.sh@941 -- # uname 00:21:40.673 16:41:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.673 16:41:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95967 00:21:40.673 16:41:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:40.673 16:41:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:40.673 killing process with pid 95967 00:21:40.673 16:41:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95967' 00:21:40.673 16:41:18 -- common/autotest_common.sh@955 -- # kill 95967 00:21:40.673 16:41:18 -- common/autotest_common.sh@960 -- # wait 95967 00:21:40.932 16:41:18 -- host/failover.sh@110 -- # sync 00:21:40.932 16:41:18 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.191 16:41:18 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:41.191 16:41:18 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:41.191 16:41:18 -- host/failover.sh@116 -- # nvmftestfini 00:21:41.191 16:41:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:41.191 16:41:18 -- nvmf/common.sh@116 -- # sync 00:21:41.191 16:41:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:41.191 16:41:18 -- nvmf/common.sh@119 -- # set +e 00:21:41.191 16:41:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:41.191 16:41:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:41.191 rmmod nvme_tcp 
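The trace above is the path-management half of the test: host/failover.sh keeps nqn.2016-06.io.spdk:cnode1 reachable on ports 4420-4422 and forces failover by detaching whichever path bdevperf is currently using. Condensed from the rpc.py invocations logged above (an illustrative sketch; rpc.py stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, everything else is taken from the log):

    # Expose two extra listeners on the target.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Attach all three paths to the same controller inside bdevperf.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Drop the active path; the bdev_nvme layer fails over to a surviving listener.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # The controller should still be present after the failover.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0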
00:21:41.191 rmmod nvme_fabrics 00:21:41.191 rmmod nvme_keyring 00:21:41.191 16:41:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:41.191 16:41:18 -- nvmf/common.sh@123 -- # set -e 00:21:41.191 16:41:18 -- nvmf/common.sh@124 -- # return 0 00:21:41.191 16:41:18 -- nvmf/common.sh@477 -- # '[' -n 95605 ']' 00:21:41.191 16:41:18 -- nvmf/common.sh@478 -- # killprocess 95605 00:21:41.191 16:41:18 -- common/autotest_common.sh@936 -- # '[' -z 95605 ']' 00:21:41.191 16:41:18 -- common/autotest_common.sh@940 -- # kill -0 95605 00:21:41.191 16:41:18 -- common/autotest_common.sh@941 -- # uname 00:21:41.191 16:41:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:41.191 16:41:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95605 00:21:41.191 killing process with pid 95605 00:21:41.191 16:41:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:41.191 16:41:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:41.191 16:41:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95605' 00:21:41.191 16:41:18 -- common/autotest_common.sh@955 -- # kill 95605 00:21:41.191 16:41:18 -- common/autotest_common.sh@960 -- # wait 95605 00:21:41.450 16:41:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:41.450 16:41:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:41.450 16:41:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:41.450 16:41:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.450 16:41:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:41.450 16:41:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.450 16:41:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.450 16:41:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.450 16:41:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:41.450 ************************************ 00:21:41.450 END TEST nvmf_failover 00:21:41.450 ************************************ 00:21:41.450 00:21:41.450 real 0m32.516s 00:21:41.450 user 2m5.733s 00:21:41.450 sys 0m4.818s 00:21:41.450 16:41:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:41.450 16:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:41.450 16:41:18 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:41.450 16:41:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:41.450 16:41:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:41.450 16:41:18 -- common/autotest_common.sh@10 -- # set +x 00:21:41.450 ************************************ 00:21:41.450 START TEST nvmf_discovery 00:21:41.450 ************************************ 00:21:41.450 16:41:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:41.709 * Looking for test storage... 
00:21:41.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:41.709 16:41:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:41.710 16:41:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:41.710 16:41:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:41.710 16:41:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:41.710 16:41:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:41.710 16:41:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:41.710 16:41:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:41.710 16:41:19 -- scripts/common.sh@335 -- # IFS=.-: 00:21:41.710 16:41:19 -- scripts/common.sh@335 -- # read -ra ver1 00:21:41.710 16:41:19 -- scripts/common.sh@336 -- # IFS=.-: 00:21:41.710 16:41:19 -- scripts/common.sh@336 -- # read -ra ver2 00:21:41.710 16:41:19 -- scripts/common.sh@337 -- # local 'op=<' 00:21:41.710 16:41:19 -- scripts/common.sh@339 -- # ver1_l=2 00:21:41.710 16:41:19 -- scripts/common.sh@340 -- # ver2_l=1 00:21:41.710 16:41:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:41.710 16:41:19 -- scripts/common.sh@343 -- # case "$op" in 00:21:41.710 16:41:19 -- scripts/common.sh@344 -- # : 1 00:21:41.710 16:41:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:41.710 16:41:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:41.710 16:41:19 -- scripts/common.sh@364 -- # decimal 1 00:21:41.710 16:41:19 -- scripts/common.sh@352 -- # local d=1 00:21:41.710 16:41:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:41.710 16:41:19 -- scripts/common.sh@354 -- # echo 1 00:21:41.710 16:41:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:41.710 16:41:19 -- scripts/common.sh@365 -- # decimal 2 00:21:41.710 16:41:19 -- scripts/common.sh@352 -- # local d=2 00:21:41.710 16:41:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:41.710 16:41:19 -- scripts/common.sh@354 -- # echo 2 00:21:41.710 16:41:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:41.710 16:41:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:41.710 16:41:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:41.710 16:41:19 -- scripts/common.sh@367 -- # return 0 00:21:41.710 16:41:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:41.710 16:41:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:41.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.710 --rc genhtml_branch_coverage=1 00:21:41.710 --rc genhtml_function_coverage=1 00:21:41.710 --rc genhtml_legend=1 00:21:41.710 --rc geninfo_all_blocks=1 00:21:41.710 --rc geninfo_unexecuted_blocks=1 00:21:41.710 00:21:41.710 ' 00:21:41.710 16:41:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:41.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.710 --rc genhtml_branch_coverage=1 00:21:41.710 --rc genhtml_function_coverage=1 00:21:41.710 --rc genhtml_legend=1 00:21:41.710 --rc geninfo_all_blocks=1 00:21:41.710 --rc geninfo_unexecuted_blocks=1 00:21:41.710 00:21:41.710 ' 00:21:41.710 16:41:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:41.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.710 --rc genhtml_branch_coverage=1 00:21:41.710 --rc genhtml_function_coverage=1 00:21:41.710 --rc genhtml_legend=1 00:21:41.710 --rc geninfo_all_blocks=1 00:21:41.710 --rc geninfo_unexecuted_blocks=1 00:21:41.710 00:21:41.710 ' 00:21:41.710 
16:41:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:41.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:41.710 --rc genhtml_branch_coverage=1 00:21:41.710 --rc genhtml_function_coverage=1 00:21:41.710 --rc genhtml_legend=1 00:21:41.710 --rc geninfo_all_blocks=1 00:21:41.710 --rc geninfo_unexecuted_blocks=1 00:21:41.710 00:21:41.710 ' 00:21:41.710 16:41:19 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:41.710 16:41:19 -- nvmf/common.sh@7 -- # uname -s 00:21:41.710 16:41:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.710 16:41:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.710 16:41:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.710 16:41:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.710 16:41:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.710 16:41:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.710 16:41:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.710 16:41:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.710 16:41:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.710 16:41:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.710 16:41:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:21:41.710 16:41:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:21:41.710 16:41:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.710 16:41:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.710 16:41:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:41.710 16:41:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:41.710 16:41:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.710 16:41:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.710 16:41:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.710 16:41:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.710 16:41:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.710 16:41:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.710 16:41:19 -- paths/export.sh@5 -- # export PATH 00:21:41.710 16:41:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.710 16:41:19 -- nvmf/common.sh@46 -- # : 0 00:21:41.710 16:41:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:41.710 16:41:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:41.710 16:41:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:41.710 16:41:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.710 16:41:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.710 16:41:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:41.710 16:41:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:41.710 16:41:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:41.710 16:41:19 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:41.710 16:41:19 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:41.710 16:41:19 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:41.710 16:41:19 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:41.710 16:41:19 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:41.710 16:41:19 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:41.710 16:41:19 -- host/discovery.sh@25 -- # nvmftestinit 00:21:41.710 16:41:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:41.710 16:41:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.710 16:41:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:41.710 16:41:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:41.710 16:41:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:41.710 16:41:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.710 16:41:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.710 16:41:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.710 16:41:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:41.710 16:41:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:41.710 16:41:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:41.710 16:41:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:41.710 16:41:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:41.710 16:41:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:41.710 16:41:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.711 16:41:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.711 16:41:19 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:41.711 16:41:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:41.711 16:41:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:41.711 16:41:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:41.711 16:41:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:41.711 16:41:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.711 16:41:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:41.711 16:41:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:41.711 16:41:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:41.711 16:41:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:41.711 16:41:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:41.711 16:41:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:41.711 Cannot find device "nvmf_tgt_br" 00:21:41.711 16:41:19 -- nvmf/common.sh@154 -- # true 00:21:41.711 16:41:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.711 Cannot find device "nvmf_tgt_br2" 00:21:41.711 16:41:19 -- nvmf/common.sh@155 -- # true 00:21:41.711 16:41:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:41.711 16:41:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:41.711 Cannot find device "nvmf_tgt_br" 00:21:41.711 16:41:19 -- nvmf/common.sh@157 -- # true 00:21:41.711 16:41:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:41.711 Cannot find device "nvmf_tgt_br2" 00:21:41.711 16:41:19 -- nvmf/common.sh@158 -- # true 00:21:41.711 16:41:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:41.711 16:41:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:41.971 16:41:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:41.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.971 16:41:19 -- nvmf/common.sh@161 -- # true 00:21:41.971 16:41:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:41.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:41.971 16:41:19 -- nvmf/common.sh@162 -- # true 00:21:41.971 16:41:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:41.971 16:41:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:41.971 16:41:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:41.971 16:41:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:41.971 16:41:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:41.971 16:41:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:41.971 16:41:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:41.971 16:41:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:41.971 16:41:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:41.971 16:41:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:41.971 16:41:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:41.971 16:41:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:41.971 16:41:19 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:41.971 16:41:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:41.971 16:41:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:41.971 16:41:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:41.971 16:41:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:41.971 16:41:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:41.971 16:41:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:41.971 16:41:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:41.971 16:41:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:41.971 16:41:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:41.971 16:41:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:41.971 16:41:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:41.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:41.971 00:21:41.971 --- 10.0.0.2 ping statistics --- 00:21:41.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.971 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:41.971 16:41:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:41.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:41.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:21:41.971 00:21:41.971 --- 10.0.0.3 ping statistics --- 00:21:41.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.971 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:41.971 16:41:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:41.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:41.971 00:21:41.971 --- 10.0.0.1 ping statistics --- 00:21:41.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.971 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:41.971 16:41:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.971 16:41:19 -- nvmf/common.sh@421 -- # return 0 00:21:41.971 16:41:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:41.971 16:41:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.971 16:41:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:41.971 16:41:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:41.971 16:41:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.971 16:41:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:41.971 16:41:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:41.971 16:41:19 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:41.971 16:41:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:41.971 16:41:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.971 16:41:19 -- common/autotest_common.sh@10 -- # set +x 00:21:41.971 16:41:19 -- nvmf/common.sh@469 -- # nvmfpid=96408 00:21:41.971 16:41:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:41.971 16:41:19 -- nvmf/common.sh@470 -- # waitforlisten 96408 00:21:41.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
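
The nvmf_veth_init sequence traced above reduces to a small veth-plus-bridge topology; a condensed sketch follows (commands taken from this trace — the second target interface nvmf_tgt_if2/10.0.0.3 and the stale-device teardown at the start are left out, and helper names may differ on other SPDK branches):

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator at 10.0.0.1, target at 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the veth peers so the two sides can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP on 4420 and let the bridge forward
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target sanity check

The three pings above verify that topology in both directions before nvmf_tgt is launched inside the namespace with -m 0x2 (core mask) and -e 0xFFFF (tracepoint group mask), which is why the reactor later reports it started on core 1.
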
00:21:41.971 16:41:19 -- common/autotest_common.sh@829 -- # '[' -z 96408 ']' 00:21:41.971 16:41:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.971 16:41:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.971 16:41:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.971 16:41:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.971 16:41:19 -- common/autotest_common.sh@10 -- # set +x 00:21:42.229 [2024-11-16 16:41:19.502471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:42.229 [2024-11-16 16:41:19.502561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.229 [2024-11-16 16:41:19.641720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.487 [2024-11-16 16:41:19.728322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:42.487 [2024-11-16 16:41:19.728688] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.487 [2024-11-16 16:41:19.728717] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.487 [2024-11-16 16:41:19.728729] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.487 [2024-11-16 16:41:19.728768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.054 16:41:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.054 16:41:20 -- common/autotest_common.sh@862 -- # return 0 00:21:43.054 16:41:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:43.054 16:41:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.054 16:41:20 -- common/autotest_common.sh@10 -- # set +x 00:21:43.054 16:41:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.054 16:41:20 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.054 16:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.054 16:41:20 -- common/autotest_common.sh@10 -- # set +x 00:21:43.313 [2024-11-16 16:41:20.548160] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.313 16:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.313 16:41:20 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:43.313 16:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.313 16:41:20 -- common/autotest_common.sh@10 -- # set +x 00:21:43.313 [2024-11-16 16:41:20.556267] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:43.313 16:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.313 16:41:20 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:43.313 16:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.313 16:41:20 -- common/autotest_common.sh@10 -- # set +x 00:21:43.313 null0 00:21:43.313 16:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.313 16:41:20 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:43.313 16:41:20 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.313 16:41:20 -- common/autotest_common.sh@10 -- # set +x 00:21:43.313 null1 00:21:43.313 16:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.313 16:41:20 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:43.313 16:41:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.313 16:41:20 -- common/autotest_common.sh@10 -- # set +x 00:21:43.313 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:43.313 16:41:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.313 16:41:20 -- host/discovery.sh@45 -- # hostpid=96465 00:21:43.313 16:41:20 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:43.313 16:41:20 -- host/discovery.sh@46 -- # waitforlisten 96465 /tmp/host.sock 00:21:43.313 16:41:20 -- common/autotest_common.sh@829 -- # '[' -z 96465 ']' 00:21:43.313 16:41:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:43.313 16:41:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.313 16:41:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:43.313 16:41:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.313 16:41:20 -- common/autotest_common.sh@10 -- # set +x 00:21:43.313 [2024-11-16 16:41:20.640205] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:43.313 [2024-11-16 16:41:20.640473] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96465 ] 00:21:43.313 [2024-11-16 16:41:20.784083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.571 [2024-11-16 16:41:20.851685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:43.571 [2024-11-16 16:41:20.852169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.137 16:41:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.137 16:41:21 -- common/autotest_common.sh@862 -- # return 0 00:21:44.137 16:41:21 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.137 16:41:21 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:44.137 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.137 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.137 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.137 16:41:21 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:44.137 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.137 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.395 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.395 16:41:21 -- host/discovery.sh@72 -- # notify_id=0 00:21:44.395 16:41:21 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:44.395 16:41:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.395 16:41:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.395 16:41:21 -- host/discovery.sh@59 -- # xargs 00:21:44.395 16:41:21 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.395 16:41:21 -- host/discovery.sh@59 -- # sort 00:21:44.395 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.395 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.395 16:41:21 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:44.395 16:41:21 -- host/discovery.sh@79 -- # get_bdev_list 00:21:44.395 16:41:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.395 16:41:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.395 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.395 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.395 16:41:21 -- host/discovery.sh@55 -- # xargs 00:21:44.395 16:41:21 -- host/discovery.sh@55 -- # sort 00:21:44.395 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.395 16:41:21 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:44.395 16:41:21 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:44.395 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.395 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.395 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.395 16:41:21 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:44.395 16:41:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.395 16:41:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.395 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.395 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.395 16:41:21 -- host/discovery.sh@59 -- # xargs 00:21:44.395 16:41:21 -- host/discovery.sh@59 -- # sort 00:21:44.395 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.395 16:41:21 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:44.395 16:41:21 -- host/discovery.sh@83 -- # get_bdev_list 00:21:44.395 16:41:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.395 16:41:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.395 16:41:21 -- host/discovery.sh@55 -- # sort 00:21:44.395 16:41:21 -- host/discovery.sh@55 -- # xargs 00:21:44.395 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.395 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.395 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.395 16:41:21 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:44.395 16:41:21 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:44.396 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.396 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.396 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.396 16:41:21 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:44.396 16:41:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.396 16:41:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.396 16:41:21 -- host/discovery.sh@59 -- # sort 00:21:44.396 16:41:21 -- host/discovery.sh@59 -- # xargs 00:21:44.396 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.396 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.654 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.654 16:41:21 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:44.654 16:41:21 -- host/discovery.sh@87 -- # get_bdev_list 00:21:44.654 16:41:21 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.654 16:41:21 -- host/discovery.sh@55 -- # sort 00:21:44.654 16:41:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.654 16:41:21 -- host/discovery.sh@55 -- # xargs 00:21:44.654 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.654 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.654 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.654 16:41:21 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:44.654 16:41:21 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:44.654 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.654 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.654 [2024-11-16 16:41:21.988577] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.654 16:41:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.654 16:41:21 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:44.654 16:41:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.654 16:41:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.654 16:41:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.654 16:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:44.654 16:41:21 -- host/discovery.sh@59 -- # sort 00:21:44.654 16:41:21 -- host/discovery.sh@59 -- # xargs 00:21:44.654 16:41:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.654 16:41:22 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:44.654 16:41:22 -- host/discovery.sh@93 -- # get_bdev_list 00:21:44.654 16:41:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.654 16:41:22 -- host/discovery.sh@55 -- # sort 00:21:44.654 16:41:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.654 16:41:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.654 16:41:22 -- common/autotest_common.sh@10 -- # set +x 00:21:44.654 16:41:22 -- host/discovery.sh@55 -- # xargs 00:21:44.654 16:41:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.654 16:41:22 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:44.654 16:41:22 -- host/discovery.sh@94 -- # get_notification_count 00:21:44.654 16:41:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:44.654 16:41:22 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:44.654 16:41:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.654 16:41:22 -- common/autotest_common.sh@10 -- # set +x 00:21:44.654 16:41:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.912 16:41:22 -- host/discovery.sh@74 -- # notification_count=0 00:21:44.912 16:41:22 -- host/discovery.sh@75 -- # notify_id=0 00:21:44.912 16:41:22 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:44.912 16:41:22 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:44.912 16:41:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.912 16:41:22 -- common/autotest_common.sh@10 -- # set +x 00:21:44.912 16:41:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.912 16:41:22 -- host/discovery.sh@100 -- # sleep 1 00:21:45.171 [2024-11-16 16:41:22.641401] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:45.171 [2024-11-16 16:41:22.641439] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:45.171 [2024-11-16 16:41:22.641460] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:45.429 [2024-11-16 16:41:22.727513] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:45.429 [2024-11-16 16:41:22.783255] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:45.429 [2024-11-16 16:41:22.783285] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:45.687 16:41:23 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:45.687 16:41:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.687 16:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.687 16:41:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.687 16:41:23 -- common/autotest_common.sh@10 -- # set +x 00:21:45.687 16:41:23 -- host/discovery.sh@59 -- # sort 00:21:45.688 16:41:23 -- host/discovery.sh@59 -- # xargs 00:21:45.947 16:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.947 16:41:23 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.947 16:41:23 -- host/discovery.sh@102 -- # get_bdev_list 00:21:45.947 16:41:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.947 16:41:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.947 16:41:23 -- host/discovery.sh@55 -- # sort 00:21:45.947 16:41:23 -- host/discovery.sh@55 -- # xargs 00:21:45.947 16:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.947 16:41:23 -- common/autotest_common.sh@10 -- # set +x 00:21:45.947 16:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.947 16:41:23 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:45.947 16:41:23 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:45.947 16:41:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:45.947 16:41:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.947 16:41:23 -- host/discovery.sh@63 -- # xargs 00:21:45.947 16:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.947 16:41:23 -- host/discovery.sh@63 -- # sort -n 00:21:45.947 16:41:23 -- 
common/autotest_common.sh@10 -- # set +x 00:21:45.947 16:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.947 16:41:23 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:45.947 16:41:23 -- host/discovery.sh@104 -- # get_notification_count 00:21:45.947 16:41:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:45.947 16:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.947 16:41:23 -- common/autotest_common.sh@10 -- # set +x 00:21:45.947 16:41:23 -- host/discovery.sh@74 -- # jq '. | length' 00:21:45.947 16:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.947 16:41:23 -- host/discovery.sh@74 -- # notification_count=1 00:21:45.947 16:41:23 -- host/discovery.sh@75 -- # notify_id=1 00:21:45.947 16:41:23 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:45.947 16:41:23 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:45.947 16:41:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.947 16:41:23 -- common/autotest_common.sh@10 -- # set +x 00:21:45.947 16:41:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.947 16:41:23 -- host/discovery.sh@109 -- # sleep 1 00:21:47.323 16:41:24 -- host/discovery.sh@110 -- # get_bdev_list 00:21:47.323 16:41:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.323 16:41:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.323 16:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.323 16:41:24 -- common/autotest_common.sh@10 -- # set +x 00:21:47.323 16:41:24 -- host/discovery.sh@55 -- # sort 00:21:47.323 16:41:24 -- host/discovery.sh@55 -- # xargs 00:21:47.323 16:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.323 16:41:24 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:47.323 16:41:24 -- host/discovery.sh@111 -- # get_notification_count 00:21:47.323 16:41:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:47.323 16:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.323 16:41:24 -- common/autotest_common.sh@10 -- # set +x 00:21:47.323 16:41:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:47.323 16:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.323 16:41:24 -- host/discovery.sh@74 -- # notification_count=1 00:21:47.323 16:41:24 -- host/discovery.sh@75 -- # notify_id=2 00:21:47.323 16:41:24 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:47.323 16:41:24 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:47.323 16:41:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.323 16:41:24 -- common/autotest_common.sh@10 -- # set +x 00:21:47.323 [2024-11-16 16:41:24.518034] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:47.323 [2024-11-16 16:41:24.518442] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:47.323 [2024-11-16 16:41:24.518478] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:47.323 16:41:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.323 16:41:24 -- host/discovery.sh@117 -- # sleep 1 00:21:47.323 [2024-11-16 16:41:24.604530] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:47.323 [2024-11-16 16:41:24.663729] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:47.323 [2024-11-16 16:41:24.663756] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:47.323 [2024-11-16 16:41:24.663763] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:48.260 16:41:25 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:48.260 16:41:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.260 16:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.260 16:41:25 -- common/autotest_common.sh@10 -- # set +x 00:21:48.260 16:41:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.260 16:41:25 -- host/discovery.sh@59 -- # sort 00:21:48.260 16:41:25 -- host/discovery.sh@59 -- # xargs 00:21:48.260 16:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.260 16:41:25 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.260 16:41:25 -- host/discovery.sh@119 -- # get_bdev_list 00:21:48.260 16:41:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.260 16:41:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.260 16:41:25 -- host/discovery.sh@55 -- # sort 00:21:48.260 16:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.260 16:41:25 -- common/autotest_common.sh@10 -- # set +x 00:21:48.260 16:41:25 -- host/discovery.sh@55 -- # xargs 00:21:48.260 16:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.260 16:41:25 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.260 16:41:25 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:48.260 16:41:25 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:48.260 16:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.260 16:41:25 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:48.260 16:41:25 -- common/autotest_common.sh@10 -- # set +x 00:21:48.260 16:41:25 -- host/discovery.sh@63 
-- # sort -n 00:21:48.260 16:41:25 -- host/discovery.sh@63 -- # xargs 00:21:48.260 16:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.260 16:41:25 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:48.260 16:41:25 -- host/discovery.sh@121 -- # get_notification_count 00:21:48.260 16:41:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:48.260 16:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.260 16:41:25 -- host/discovery.sh@74 -- # jq '. | length' 00:21:48.260 16:41:25 -- common/autotest_common.sh@10 -- # set +x 00:21:48.260 16:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.260 16:41:25 -- host/discovery.sh@74 -- # notification_count=0 00:21:48.260 16:41:25 -- host/discovery.sh@75 -- # notify_id=2 00:21:48.260 16:41:25 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:48.260 16:41:25 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:48.260 16:41:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.260 16:41:25 -- common/autotest_common.sh@10 -- # set +x 00:21:48.260 [2024-11-16 16:41:25.746863] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:48.260 [2024-11-16 16:41:25.746913] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.520 [2024-11-16 16:41:25.751000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.520 [2024-11-16 16:41:25.751037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.520 [2024-11-16 16:41:25.751050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.520 [2024-11-16 16:41:25.751093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.520 [2024-11-16 16:41:25.751105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.520 [2024-11-16 16:41:25.751114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.520 [2024-11-16 16:41:25.751124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.520 [2024-11-16 16:41:25.751133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.520 [2024-11-16 16:41:25.751143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172d570 is same with the state(5) to be set 00:21:48.520 16:41:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.520 16:41:25 -- host/discovery.sh@127 -- # sleep 1 00:21:48.520 [2024-11-16 16:41:25.760956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172d570 (9): Bad file descriptor 00:21:48.520 [2024-11-16 16:41:25.770976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.520 [2024-11-16 16:41:25.771093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:21:48.520 [2024-11-16 16:41:25.771148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.520 [2024-11-16 16:41:25.771167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172d570 with addr=10.0.0.2, port=4420 00:21:48.520 [2024-11-16 16:41:25.771178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172d570 is same with the state(5) to be set 00:21:48.520 [2024-11-16 16:41:25.771194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172d570 (9): Bad file descriptor 00:21:48.520 [2024-11-16 16:41:25.771258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.521 [2024-11-16 16:41:25.771272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.521 [2024-11-16 16:41:25.771284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.521 [2024-11-16 16:41:25.771301] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.521 [2024-11-16 16:41:25.781034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.521 [2024-11-16 16:41:25.781166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.781254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.781275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172d570 with addr=10.0.0.2, port=4420 00:21:48.521 [2024-11-16 16:41:25.781286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172d570 is same with the state(5) to be set 00:21:48.521 [2024-11-16 16:41:25.781304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172d570 (9): Bad file descriptor 00:21:48.521 [2024-11-16 16:41:25.781321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.521 [2024-11-16 16:41:25.781331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.521 [2024-11-16 16:41:25.781357] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.521 [2024-11-16 16:41:25.781374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.521 [2024-11-16 16:41:25.791134] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.521 [2024-11-16 16:41:25.791223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.791273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.791291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172d570 with addr=10.0.0.2, port=4420 00:21:48.521 [2024-11-16 16:41:25.791302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172d570 is same with the state(5) to be set 00:21:48.521 [2024-11-16 16:41:25.791318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172d570 (9): Bad file descriptor 00:21:48.521 [2024-11-16 16:41:25.791344] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.521 [2024-11-16 16:41:25.791355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.521 [2024-11-16 16:41:25.791364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.521 [2024-11-16 16:41:25.791394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.521 [2024-11-16 16:41:25.801212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.521 [2024-11-16 16:41:25.801304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.801355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.801375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172d570 with addr=10.0.0.2, port=4420 00:21:48.521 [2024-11-16 16:41:25.801386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172d570 is same with the state(5) to be set 00:21:48.521 [2024-11-16 16:41:25.801403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172d570 (9): Bad file descriptor 00:21:48.521 [2024-11-16 16:41:25.801435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.521 [2024-11-16 16:41:25.801445] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.521 [2024-11-16 16:41:25.801454] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.521 [2024-11-16 16:41:25.801470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
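
The burst of connect() failed, errno = 111 (ECONNREFUSED) records here is the hand-off under test, not a fault: discovery originally attached nvme0 through 10.0.0.2:4420, the script then advertised 4421 and tore down the 4420 listener, so bdev_nvme keeps retrying the dead port until the next discovery log page reports the 4420 path gone and I/O settles on 4421 (the "not found" / "found again" pair a few records below). The flip itself is two target-side RPCs plus a host-side check; as a sketch using scripts/rpc.py, which rpc_cmd in this trace ultimately calls:

    # announce the new portal, then retire the old one
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host app: the surviving path should now be 4421 only
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
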
00:21:48.521 [2024-11-16 16:41:25.811269] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.521 [2024-11-16 16:41:25.811353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.811401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.811420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172d570 with addr=10.0.0.2, port=4420 00:21:48.521 [2024-11-16 16:41:25.811430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172d570 is same with the state(5) to be set 00:21:48.521 [2024-11-16 16:41:25.811446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172d570 (9): Bad file descriptor 00:21:48.521 [2024-11-16 16:41:25.811471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.521 [2024-11-16 16:41:25.811483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.521 [2024-11-16 16:41:25.811492] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.521 [2024-11-16 16:41:25.811507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.521 [2024-11-16 16:41:25.821322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.521 [2024-11-16 16:41:25.821403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.821450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.821469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172d570 with addr=10.0.0.2, port=4420 00:21:48.521 [2024-11-16 16:41:25.821479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172d570 is same with the state(5) to be set 00:21:48.521 [2024-11-16 16:41:25.821494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172d570 (9): Bad file descriptor 00:21:48.521 [2024-11-16 16:41:25.821508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.521 [2024-11-16 16:41:25.821534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.521 [2024-11-16 16:41:25.821545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.521 [2024-11-16 16:41:25.821575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.521 [2024-11-16 16:41:25.831373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.521 [2024-11-16 16:41:25.831472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.831522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.521 [2024-11-16 16:41:25.831541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x172d570 with addr=10.0.0.2, port=4420 00:21:48.521 [2024-11-16 16:41:25.831552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x172d570 is same with the state(5) to be set 00:21:48.521 [2024-11-16 16:41:25.831568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x172d570 (9): Bad file descriptor 00:21:48.521 [2024-11-16 16:41:25.831595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.521 [2024-11-16 16:41:25.831607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.521 [2024-11-16 16:41:25.831633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.521 [2024-11-16 16:41:25.831649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.521 [2024-11-16 16:41:25.832942] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:48.521 [2024-11-16 16:41:25.832972] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:49.457 16:41:26 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:49.457 16:41:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.457 16:41:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.457 16:41:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.457 16:41:26 -- common/autotest_common.sh@10 -- # set +x 00:21:49.457 16:41:26 -- host/discovery.sh@59 -- # sort 00:21:49.457 16:41:26 -- host/discovery.sh@59 -- # xargs 00:21:49.457 16:41:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.457 16:41:26 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.457 16:41:26 -- host/discovery.sh@129 -- # get_bdev_list 00:21:49.457 16:41:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.457 16:41:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.457 16:41:26 -- common/autotest_common.sh@10 -- # set +x 00:21:49.457 16:41:26 -- host/discovery.sh@55 -- # sort 00:21:49.457 16:41:26 -- host/discovery.sh@55 -- # xargs 00:21:49.457 16:41:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.457 16:41:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.457 16:41:26 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.457 16:41:26 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:49.457 16:41:26 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.457 16:41:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.457 16:41:26 -- common/autotest_common.sh@10 -- # set +x 00:21:49.457 16:41:26 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.457 16:41:26 -- 
host/discovery.sh@63 -- # sort -n 00:21:49.457 16:41:26 -- host/discovery.sh@63 -- # xargs 00:21:49.457 16:41:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.457 16:41:26 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:49.457 16:41:26 -- host/discovery.sh@131 -- # get_notification_count 00:21:49.457 16:41:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:49.457 16:41:26 -- host/discovery.sh@74 -- # jq '. | length' 00:21:49.457 16:41:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.457 16:41:26 -- common/autotest_common.sh@10 -- # set +x 00:21:49.457 16:41:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.716 16:41:26 -- host/discovery.sh@74 -- # notification_count=0 00:21:49.716 16:41:26 -- host/discovery.sh@75 -- # notify_id=2 00:21:49.716 16:41:26 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:49.716 16:41:26 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:49.716 16:41:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.716 16:41:26 -- common/autotest_common.sh@10 -- # set +x 00:21:49.716 16:41:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.716 16:41:26 -- host/discovery.sh@135 -- # sleep 1 00:21:50.651 16:41:27 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:50.651 16:41:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.651 16:41:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.651 16:41:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.651 16:41:28 -- host/discovery.sh@59 -- # sort 00:21:50.651 16:41:28 -- common/autotest_common.sh@10 -- # set +x 00:21:50.651 16:41:28 -- host/discovery.sh@59 -- # xargs 00:21:50.651 16:41:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.651 16:41:28 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:50.651 16:41:28 -- host/discovery.sh@137 -- # get_bdev_list 00:21:50.651 16:41:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.651 16:41:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.651 16:41:28 -- common/autotest_common.sh@10 -- # set +x 00:21:50.651 16:41:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.651 16:41:28 -- host/discovery.sh@55 -- # xargs 00:21:50.651 16:41:28 -- host/discovery.sh@55 -- # sort 00:21:50.651 16:41:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.651 16:41:28 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:50.651 16:41:28 -- host/discovery.sh@138 -- # get_notification_count 00:21:50.651 16:41:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.651 16:41:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:50.651 16:41:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.651 16:41:28 -- common/autotest_common.sh@10 -- # set +x 00:21:50.651 16:41:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.909 16:41:28 -- host/discovery.sh@74 -- # notification_count=2 00:21:50.909 16:41:28 -- host/discovery.sh@75 -- # notify_id=4 00:21:50.909 16:41:28 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:50.909 16:41:28 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.909 16:41:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.909 16:41:28 -- common/autotest_common.sh@10 -- # set +x 00:21:51.844 [2024-11-16 16:41:29.182393] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:51.844 [2024-11-16 16:41:29.182418] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:51.844 [2024-11-16 16:41:29.182436] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:51.844 [2024-11-16 16:41:29.268479] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:51.844 [2024-11-16 16:41:29.327342] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:51.844 [2024-11-16 16:41:29.327377] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:51.844 16:41:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.844 16:41:29 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.844 16:41:29 -- common/autotest_common.sh@650 -- # local es=0 00:21:51.844 16:41:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:51.844 16:41:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:51.844 16:41:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:51.844 16:41:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:52.103 16:41:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.103 16:41:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.103 16:41:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.103 16:41:29 -- common/autotest_common.sh@10 -- # set +x 00:21:52.103 2024/11/16 16:41:29 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:52.103 request: 00:21:52.103 { 00:21:52.103 "method": "bdev_nvme_start_discovery", 00:21:52.103 "params": { 00:21:52.103 "name": "nvme", 00:21:52.103 "trtype": "tcp", 00:21:52.103 "traddr": "10.0.0.2", 00:21:52.103 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.103 "adrfam": "ipv4", 00:21:52.103 "trsvcid": "8009", 00:21:52.103 "wait_for_attach": true 00:21:52.103 } 
00:21:52.103 } 00:21:52.103 Got JSON-RPC error response 00:21:52.103 GoRPCClient: error on JSON-RPC call 00:21:52.103 16:41:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:52.103 16:41:29 -- common/autotest_common.sh@653 -- # es=1 00:21:52.103 16:41:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:52.103 16:41:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:52.103 16:41:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:52.103 16:41:29 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:52.103 16:41:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.103 16:41:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.103 16:41:29 -- common/autotest_common.sh@10 -- # set +x 00:21:52.103 16:41:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.103 16:41:29 -- host/discovery.sh@67 -- # sort 00:21:52.103 16:41:29 -- host/discovery.sh@67 -- # xargs 00:21:52.103 16:41:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.103 16:41:29 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:52.103 16:41:29 -- host/discovery.sh@147 -- # get_bdev_list 00:21:52.103 16:41:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.103 16:41:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.103 16:41:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.103 16:41:29 -- host/discovery.sh@55 -- # xargs 00:21:52.103 16:41:29 -- common/autotest_common.sh@10 -- # set +x 00:21:52.103 16:41:29 -- host/discovery.sh@55 -- # sort 00:21:52.103 16:41:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.103 16:41:29 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.103 16:41:29 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.103 16:41:29 -- common/autotest_common.sh@650 -- # local es=0 00:21:52.103 16:41:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.103 16:41:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:52.103 16:41:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.103 16:41:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:52.103 16:41:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.103 16:41:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.103 16:41:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.103 16:41:29 -- common/autotest_common.sh@10 -- # set +x 00:21:52.103 2024/11/16 16:41:29 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:52.103 request: 00:21:52.103 { 00:21:52.103 "method": "bdev_nvme_start_discovery", 00:21:52.103 "params": { 00:21:52.103 "name": "nvme_second", 00:21:52.103 "trtype": "tcp", 00:21:52.103 "traddr": "10.0.0.2", 00:21:52.103 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.103 "adrfam": "ipv4", 00:21:52.103 
"trsvcid": "8009", 00:21:52.103 "wait_for_attach": true 00:21:52.103 } 00:21:52.103 } 00:21:52.103 Got JSON-RPC error response 00:21:52.103 GoRPCClient: error on JSON-RPC call 00:21:52.103 16:41:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:52.103 16:41:29 -- common/autotest_common.sh@653 -- # es=1 00:21:52.103 16:41:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:52.103 16:41:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:52.103 16:41:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:52.103 16:41:29 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:52.103 16:41:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.103 16:41:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.103 16:41:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.103 16:41:29 -- common/autotest_common.sh@10 -- # set +x 00:21:52.103 16:41:29 -- host/discovery.sh@67 -- # sort 00:21:52.103 16:41:29 -- host/discovery.sh@67 -- # xargs 00:21:52.103 16:41:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.103 16:41:29 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:52.103 16:41:29 -- host/discovery.sh@153 -- # get_bdev_list 00:21:52.103 16:41:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.103 16:41:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.103 16:41:29 -- common/autotest_common.sh@10 -- # set +x 00:21:52.103 16:41:29 -- host/discovery.sh@55 -- # sort 00:21:52.103 16:41:29 -- host/discovery.sh@55 -- # xargs 00:21:52.103 16:41:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.103 16:41:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.103 16:41:29 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.103 16:41:29 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.103 16:41:29 -- common/autotest_common.sh@650 -- # local es=0 00:21:52.103 16:41:29 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.103 16:41:29 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:52.103 16:41:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.103 16:41:29 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:52.103 16:41:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.103 16:41:29 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.103 16:41:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.103 16:41:29 -- common/autotest_common.sh@10 -- # set +x 00:21:53.491 [2024-11-16 16:41:30.593917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.491 [2024-11-16 16:41:30.594017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.491 [2024-11-16 16:41:30.594040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c8f80 with addr=10.0.0.2, port=8010 00:21:53.491 [2024-11-16 16:41:30.594078] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:53.491 [2024-11-16 16:41:30.594093] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:53.491 [2024-11-16 16:41:30.594103] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:54.122 [2024-11-16 16:41:31.593837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.122 [2024-11-16 16:41:31.593919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.122 [2024-11-16 16:41:31.593939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a1ca0 with addr=10.0.0.2, port=8010 00:21:54.122 [2024-11-16 16:41:31.593954] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:54.122 [2024-11-16 16:41:31.593963] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:54.122 [2024-11-16 16:41:31.593973] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:55.505 [2024-11-16 16:41:32.593762] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:55.505 2024/11/16 16:41:32 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:55.505 request: 00:21:55.505 { 00:21:55.505 "method": "bdev_nvme_start_discovery", 00:21:55.505 "params": { 00:21:55.505 "name": "nvme_second", 00:21:55.505 "trtype": "tcp", 00:21:55.505 "traddr": "10.0.0.2", 00:21:55.505 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:55.505 "adrfam": "ipv4", 00:21:55.505 "trsvcid": "8010", 00:21:55.505 "attach_timeout_ms": 3000 00:21:55.505 } 00:21:55.505 } 00:21:55.505 Got JSON-RPC error response 00:21:55.505 GoRPCClient: error on JSON-RPC call 00:21:55.505 16:41:32 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:55.505 16:41:32 -- common/autotest_common.sh@653 -- # es=1 00:21:55.505 16:41:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.505 16:41:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.505 16:41:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.505 16:41:32 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:55.505 16:41:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:55.505 16:41:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:55.505 16:41:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.505 16:41:32 -- host/discovery.sh@67 -- # sort 00:21:55.505 16:41:32 -- common/autotest_common.sh@10 -- # set +x 00:21:55.505 16:41:32 -- host/discovery.sh@67 -- # xargs 00:21:55.505 16:41:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.505 16:41:32 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:55.505 16:41:32 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:55.505 16:41:32 -- host/discovery.sh@162 -- # kill 96465 00:21:55.505 16:41:32 -- host/discovery.sh@163 -- # nvmftestfini 00:21:55.505 16:41:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:55.505 16:41:32 -- nvmf/common.sh@116 -- # sync 00:21:55.505 16:41:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:55.505 16:41:32 -- nvmf/common.sh@119 -- # set +e 00:21:55.505 16:41:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:55.505 16:41:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
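Note: the two rejected bdev_nvme_start_discovery calls above are the negative cases this test exercises. Reusing a discovery name that is already active fails with Code=-17 (File exists), and pointing a new discovery at port 8010, where nothing listens, fails with Code=-110 once the 3000 ms attach timeout elapses. A minimal sketch reproducing both by hand, assuming the stock scripts/rpc.py from the SPDK tree and the socket, addresses, and flags seen in this run:

    # Duplicate name: expected to be rejected with Code=-17 (File exists)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w && echo "unexpected: duplicate accepted"
    # Unreachable discovery service: expected to fail with Code=-110
    # after the attach timeout (-T, in milliseconds) expires
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000 || echo "timed out as expected"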
00:21:55.505 rmmod nvme_tcp 00:21:55.505 rmmod nvme_fabrics 00:21:55.505 rmmod nvme_keyring 00:21:55.505 16:41:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:55.505 16:41:32 -- nvmf/common.sh@123 -- # set -e 00:21:55.505 16:41:32 -- nvmf/common.sh@124 -- # return 0 00:21:55.505 16:41:32 -- nvmf/common.sh@477 -- # '[' -n 96408 ']' 00:21:55.505 16:41:32 -- nvmf/common.sh@478 -- # killprocess 96408 00:21:55.505 16:41:32 -- common/autotest_common.sh@936 -- # '[' -z 96408 ']' 00:21:55.505 16:41:32 -- common/autotest_common.sh@940 -- # kill -0 96408 00:21:55.505 16:41:32 -- common/autotest_common.sh@941 -- # uname 00:21:55.505 16:41:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:55.505 16:41:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96408 00:21:55.505 16:41:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:55.505 16:41:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:55.505 killing process with pid 96408 00:21:55.505 16:41:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96408' 00:21:55.505 16:41:32 -- common/autotest_common.sh@955 -- # kill 96408 00:21:55.505 16:41:32 -- common/autotest_common.sh@960 -- # wait 96408 00:21:55.765 16:41:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:55.765 16:41:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:55.765 16:41:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:55.765 16:41:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.765 16:41:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:55.765 16:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.765 16:41:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.765 16:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.765 16:41:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:55.765 00:21:55.765 real 0m14.182s 00:21:55.765 user 0m27.618s 00:21:55.765 sys 0m1.750s 00:21:55.765 16:41:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:55.765 ************************************ 00:21:55.765 END TEST nvmf_discovery 00:21:55.765 16:41:33 -- common/autotest_common.sh@10 -- # set +x 00:21:55.765 ************************************ 00:21:55.765 16:41:33 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:55.765 16:41:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:55.765 16:41:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:55.765 16:41:33 -- common/autotest_common.sh@10 -- # set +x 00:21:55.765 ************************************ 00:21:55.765 START TEST nvmf_discovery_remove_ifc 00:21:55.765 ************************************ 00:21:55.765 16:41:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:55.765 * Looking for test storage... 
00:21:55.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:55.765 16:41:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:55.765 16:41:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:55.765 16:41:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:55.765 16:41:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:55.765 16:41:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:55.765 16:41:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:55.765 16:41:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:55.765 16:41:33 -- scripts/common.sh@335 -- # IFS=.-: 00:21:55.765 16:41:33 -- scripts/common.sh@335 -- # read -ra ver1 00:21:55.765 16:41:33 -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.765 16:41:33 -- scripts/common.sh@336 -- # read -ra ver2 00:21:55.765 16:41:33 -- scripts/common.sh@337 -- # local 'op=<' 00:21:55.765 16:41:33 -- scripts/common.sh@339 -- # ver1_l=2 00:21:55.765 16:41:33 -- scripts/common.sh@340 -- # ver2_l=1 00:21:55.765 16:41:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:55.765 16:41:33 -- scripts/common.sh@343 -- # case "$op" in 00:21:55.765 16:41:33 -- scripts/common.sh@344 -- # : 1 00:21:55.765 16:41:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:55.765 16:41:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:56.026 16:41:33 -- scripts/common.sh@364 -- # decimal 1 00:21:56.026 16:41:33 -- scripts/common.sh@352 -- # local d=1 00:21:56.026 16:41:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.026 16:41:33 -- scripts/common.sh@354 -- # echo 1 00:21:56.026 16:41:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:56.026 16:41:33 -- scripts/common.sh@365 -- # decimal 2 00:21:56.026 16:41:33 -- scripts/common.sh@352 -- # local d=2 00:21:56.026 16:41:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.026 16:41:33 -- scripts/common.sh@354 -- # echo 2 00:21:56.026 16:41:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:56.026 16:41:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:56.026 16:41:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:56.026 16:41:33 -- scripts/common.sh@367 -- # return 0 00:21:56.026 16:41:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.026 16:41:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:56.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.026 --rc genhtml_branch_coverage=1 00:21:56.026 --rc genhtml_function_coverage=1 00:21:56.026 --rc genhtml_legend=1 00:21:56.026 --rc geninfo_all_blocks=1 00:21:56.026 --rc geninfo_unexecuted_blocks=1 00:21:56.026 00:21:56.026 ' 00:21:56.026 16:41:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:56.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.026 --rc genhtml_branch_coverage=1 00:21:56.026 --rc genhtml_function_coverage=1 00:21:56.026 --rc genhtml_legend=1 00:21:56.026 --rc geninfo_all_blocks=1 00:21:56.026 --rc geninfo_unexecuted_blocks=1 00:21:56.026 00:21:56.026 ' 00:21:56.026 16:41:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:56.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.026 --rc genhtml_branch_coverage=1 00:21:56.026 --rc genhtml_function_coverage=1 00:21:56.026 --rc genhtml_legend=1 00:21:56.026 --rc geninfo_all_blocks=1 00:21:56.026 --rc geninfo_unexecuted_blocks=1 00:21:56.026 00:21:56.026 ' 00:21:56.026 
16:41:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:56.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.026 --rc genhtml_branch_coverage=1 00:21:56.026 --rc genhtml_function_coverage=1 00:21:56.026 --rc genhtml_legend=1 00:21:56.026 --rc geninfo_all_blocks=1 00:21:56.026 --rc geninfo_unexecuted_blocks=1 00:21:56.026 00:21:56.026 ' 00:21:56.026 16:41:33 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:56.026 16:41:33 -- nvmf/common.sh@7 -- # uname -s 00:21:56.026 16:41:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.026 16:41:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.026 16:41:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.026 16:41:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.026 16:41:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.026 16:41:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.026 16:41:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.026 16:41:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.026 16:41:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.026 16:41:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.026 16:41:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:21:56.026 16:41:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:21:56.026 16:41:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.026 16:41:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.026 16:41:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:56.026 16:41:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:56.026 16:41:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.026 16:41:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.026 16:41:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.026 16:41:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.026 16:41:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.026 16:41:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.026 16:41:33 -- paths/export.sh@5 -- # export PATH 00:21:56.026 16:41:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.026 16:41:33 -- nvmf/common.sh@46 -- # : 0 00:21:56.026 16:41:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:56.026 16:41:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:56.026 16:41:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:56.026 16:41:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.026 16:41:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.026 16:41:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:56.026 16:41:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:56.026 16:41:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:56.026 16:41:33 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:56.026 16:41:33 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:56.026 16:41:33 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:56.026 16:41:33 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:56.026 16:41:33 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:56.026 16:41:33 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:56.026 16:41:33 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:56.026 16:41:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:56.026 16:41:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.026 16:41:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:56.026 16:41:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:56.026 16:41:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:56.026 16:41:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.026 16:41:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.026 16:41:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.026 16:41:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:56.026 16:41:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:56.026 16:41:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:56.026 16:41:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:56.026 16:41:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:56.026 16:41:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:56.026 16:41:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.026 16:41:33 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.026 16:41:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:56.026 16:41:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:56.026 16:41:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:56.026 16:41:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:56.026 16:41:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:56.026 16:41:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.026 16:41:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:56.026 16:41:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:56.026 16:41:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:56.026 16:41:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:56.026 16:41:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:56.026 16:41:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:56.026 Cannot find device "nvmf_tgt_br" 00:21:56.026 16:41:33 -- nvmf/common.sh@154 -- # true 00:21:56.026 16:41:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:56.026 Cannot find device "nvmf_tgt_br2" 00:21:56.026 16:41:33 -- nvmf/common.sh@155 -- # true 00:21:56.026 16:41:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:56.026 16:41:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:56.026 Cannot find device "nvmf_tgt_br" 00:21:56.026 16:41:33 -- nvmf/common.sh@157 -- # true 00:21:56.026 16:41:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:56.026 Cannot find device "nvmf_tgt_br2" 00:21:56.026 16:41:33 -- nvmf/common.sh@158 -- # true 00:21:56.026 16:41:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:56.026 16:41:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:56.026 16:41:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:56.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.026 16:41:33 -- nvmf/common.sh@161 -- # true 00:21:56.026 16:41:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:56.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:56.026 16:41:33 -- nvmf/common.sh@162 -- # true 00:21:56.026 16:41:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:56.026 16:41:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:56.026 16:41:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:56.026 16:41:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:56.026 16:41:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:56.027 16:41:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:56.027 16:41:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:56.027 16:41:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:56.027 16:41:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:56.027 16:41:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:56.285 16:41:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:56.285 16:41:33 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:56.285 16:41:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:56.285 16:41:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:56.285 16:41:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:56.285 16:41:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:56.285 16:41:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:56.285 16:41:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:56.285 16:41:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:56.285 16:41:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:56.285 16:41:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:56.285 16:41:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:56.285 16:41:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:56.285 16:41:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:56.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:21:56.285 00:21:56.285 --- 10.0.0.2 ping statistics --- 00:21:56.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.285 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:56.285 16:41:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:56.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:56.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:21:56.285 00:21:56.285 --- 10.0.0.3 ping statistics --- 00:21:56.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.285 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:21:56.285 16:41:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:56.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:56.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:56.285 00:21:56.285 --- 10.0.0.1 ping statistics --- 00:21:56.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.285 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:56.285 16:41:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.285 16:41:33 -- nvmf/common.sh@421 -- # return 0 00:21:56.285 16:41:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:56.285 16:41:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.285 16:41:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:56.285 16:41:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:56.285 16:41:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.285 16:41:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:56.285 16:41:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:56.285 16:41:33 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:56.285 16:41:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:56.285 16:41:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.285 16:41:33 -- common/autotest_common.sh@10 -- # set +x 00:21:56.285 16:41:33 -- nvmf/common.sh@469 -- # nvmfpid=96971 00:21:56.285 16:41:33 -- nvmf/common.sh@470 -- # waitforlisten 96971 00:21:56.285 16:41:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:56.285 16:41:33 -- common/autotest_common.sh@829 -- # '[' -z 96971 ']' 00:21:56.285 16:41:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.285 16:41:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.285 16:41:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.285 16:41:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.285 16:41:33 -- common/autotest_common.sh@10 -- # set +x 00:21:56.285 [2024-11-16 16:41:33.712924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:56.286 [2024-11-16 16:41:33.713008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.544 [2024-11-16 16:41:33.853949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.544 [2024-11-16 16:41:33.911308] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:56.544 [2024-11-16 16:41:33.911464] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.544 [2024-11-16 16:41:33.911476] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.544 [2024-11-16 16:41:33.911483] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
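Note: the interface plumbing above (nvmf_veth_init) gives the target its own network namespace, reachable from the initiator through a bridge, which is what later lets discovery_remove_ifc yank the target interface without touching real hardware. Condensed to its essentials, with the names and addresses taken from this run (the second target interface, 10.0.0.3 on nvmf_tgt_if2, omitted for brevity), the topology is:

    # Target side lives in its own netns; one veth pair per side, bridged.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target namespace, verified above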
00:21:56.544 [2024-11-16 16:41:33.911505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.112 16:41:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.112 16:41:34 -- common/autotest_common.sh@862 -- # return 0 00:21:57.112 16:41:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:57.112 16:41:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.371 16:41:34 -- common/autotest_common.sh@10 -- # set +x 00:21:57.371 16:41:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.371 16:41:34 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:57.371 16:41:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.371 16:41:34 -- common/autotest_common.sh@10 -- # set +x 00:21:57.371 [2024-11-16 16:41:34.663497] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.371 [2024-11-16 16:41:34.671622] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:57.371 null0 00:21:57.371 [2024-11-16 16:41:34.703542] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.371 16:41:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.371 16:41:34 -- host/discovery_remove_ifc.sh@59 -- # hostpid=97021 00:21:57.371 16:41:34 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:57.371 16:41:34 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 97021 /tmp/host.sock 00:21:57.371 16:41:34 -- common/autotest_common.sh@829 -- # '[' -z 97021 ']' 00:21:57.371 16:41:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:57.371 16:41:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.371 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:57.371 16:41:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:57.371 16:41:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.371 16:41:34 -- common/autotest_common.sh@10 -- # set +x 00:21:57.371 [2024-11-16 16:41:34.782611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:57.371 [2024-11-16 16:41:34.782695] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97021 ] 00:21:57.629 [2024-11-16 16:41:34.922563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.629 [2024-11-16 16:41:34.994651] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:57.629 [2024-11-16 16:41:34.994806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.566 16:41:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.566 16:41:35 -- common/autotest_common.sh@862 -- # return 0 00:21:58.566 16:41:35 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.566 16:41:35 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:58.566 16:41:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.566 16:41:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.566 16:41:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.566 16:41:35 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:58.566 16:41:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.566 16:41:35 -- common/autotest_common.sh@10 -- # set +x 00:21:58.566 16:41:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.566 16:41:35 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:58.566 16:41:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.566 16:41:35 -- common/autotest_common.sh@10 -- # set +x 00:21:59.501 [2024-11-16 16:41:36.826739] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:59.501 [2024-11-16 16:41:36.826774] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:59.501 [2024-11-16 16:41:36.826793] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:59.501 [2024-11-16 16:41:36.914120] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:59.501 [2024-11-16 16:41:36.976739] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:59.501 [2024-11-16 16:41:36.976798] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:59.501 [2024-11-16 16:41:36.976830] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:59.501 [2024-11-16 16:41:36.976847] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:59.501 [2024-11-16 16:41:36.976867] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:59.501 16:41:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.501 16:41:36 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:59.501 16:41:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:59.501 16:41:36 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.501 16:41:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.501 16:41:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:59.501 16:41:36 -- common/autotest_common.sh@10 -- # set +x 00:21:59.501 16:41:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:59.501 16:41:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:59.501 [2024-11-16 16:41:36.985134] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9a2da0 was disconnected and freed. delete nvme_qpair. 00:21:59.760 16:41:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.760 16:41:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:59.760 16:41:37 -- common/autotest_common.sh@10 -- # set +x 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:59.760 16:41:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:59.760 16:41:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:00.696 16:41:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:00.696 16:41:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:00.696 16:41:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:00.696 16:41:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:00.696 16:41:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.696 16:41:38 -- common/autotest_common.sh@10 -- # set +x 00:22:00.696 16:41:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:00.696 16:41:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.696 16:41:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:00.696 16:41:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:02.072 16:41:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:02.072 16:41:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:02.072 16:41:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.072 16:41:39 -- common/autotest_common.sh@10 -- # set +x 00:22:02.072 16:41:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:02.072 16:41:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:02.072 16:41:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:02.072 16:41:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.072 16:41:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:02.072 16:41:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:03.007 16:41:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.007 16:41:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:03.007 16:41:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.007 16:41:40 -- common/autotest_common.sh@10 -- # set +x 00:22:03.007 16:41:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.007 16:41:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.007 16:41:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.007 16:41:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.007 16:41:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:03.007 16:41:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:03.943 16:41:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.943 16:41:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.943 16:41:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.943 16:41:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.943 16:41:41 -- common/autotest_common.sh@10 -- # set +x 00:22:03.943 16:41:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.943 16:41:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.943 16:41:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.943 16:41:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:03.943 16:41:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:04.879 16:41:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.879 16:41:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.879 16:41:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.879 16:41:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.879 16:41:42 -- common/autotest_common.sh@10 -- # set +x 00:22:04.879 16:41:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.879 16:41:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.138 16:41:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.138 16:41:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.138 16:41:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.138 [2024-11-16 16:41:42.404746] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:05.138 [2024-11-16 16:41:42.404812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.138 [2024-11-16 16:41:42.404828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.138 [2024-11-16 16:41:42.404840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.138 [2024-11-16 16:41:42.404848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.138 [2024-11-16 16:41:42.404857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.138 [2024-11-16 16:41:42.404866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.138 [2024-11-16 16:41:42.404875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.138 [2024-11-16 16:41:42.404883] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.138 [2024-11-16 16:41:42.404893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:05.138 [2024-11-16 16:41:42.404901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.138 [2024-11-16 16:41:42.404909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90c690 is same with the state(5) to be set 00:22:05.138 [2024-11-16 16:41:42.414742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90c690 (9): Bad file descriptor 00:22:05.138 [2024-11-16 16:41:42.424764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.074 16:41:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.074 16:41:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.074 16:41:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.074 16:41:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.074 16:41:43 -- common/autotest_common.sh@10 -- # set +x 00:22:06.074 16:41:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.074 16:41:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.074 [2024-11-16 16:41:43.432135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:07.010 [2024-11-16 16:41:44.456203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:07.010 [2024-11-16 16:41:44.456304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x90c690 with addr=10.0.0.2, port=4420 00:22:07.010 [2024-11-16 16:41:44.456341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90c690 is same with the state(5) to be set 00:22:07.010 [2024-11-16 16:41:44.456390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:07.010 [2024-11-16 16:41:44.456416] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:07.010 [2024-11-16 16:41:44.456439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:07.010 [2024-11-16 16:41:44.456464] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:07.010 [2024-11-16 16:41:44.457293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90c690 (9): Bad file descriptor 00:22:07.010 [2024-11-16 16:41:44.457363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
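Note: the burst of ABORTED - SQ DELETION completions above is the expected fallout of deleting the target's address (ip addr del 10.0.0.2/24) and downing nvmf_tgt_if under a live controller; reconnect attempts then fail with errno 110 until the host's --ctrlr-loss-timeout-sec expires. The test tracks all of this by polling the host's bdev list: wait_for_bdev '' until the namespace is gone, then, after restoring the interface, wait_for_bdev nvme1n1. A condensed sketch of that polling helper (not the verbatim script), assuming the stock scripts/rpc.py and the /tmp/host.sock socket used here:

    # Poll the host app's bdev list once a second until it matches the
    # expected string ('' while the interface is gone, nvme1n1 after
    # the discovery service re-attaches the namespace).
    wait_for_bdev() {
        local expected=$1 bdevs
        while :; do
            bdevs=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                    | jq -r '.[].name' | sort | xargs)
            [[ $bdevs == "$expected" ]] && return 0
            sleep 1
        done
    }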
00:22:07.010 [2024-11-16 16:41:44.457421] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:07.010 [2024-11-16 16:41:44.457500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.010 [2024-11-16 16:41:44.457535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.010 [2024-11-16 16:41:44.457564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.010 [2024-11-16 16:41:44.457588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.010 [2024-11-16 16:41:44.457614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.010 [2024-11-16 16:41:44.457637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.010 [2024-11-16 16:41:44.457661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.010 [2024-11-16 16:41:44.457685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.010 [2024-11-16 16:41:44.457711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.010 [2024-11-16 16:41:44.457733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.010 [2024-11-16 16:41:44.457756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:07.010 [2024-11-16 16:41:44.457824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96a410 (9): Bad file descriptor 00:22:07.010 [2024-11-16 16:41:44.458827] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:07.010 [2024-11-16 16:41:44.458880] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:07.010 16:41:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.010 16:41:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.010 16:41:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.387 16:41:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.387 16:41:45 -- common/autotest_common.sh@10 -- # set +x 00:22:08.387 16:41:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.387 16:41:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.387 16:41:45 -- common/autotest_common.sh@10 -- # set +x 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.387 16:41:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:08.387 16:41:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:09.324 [2024-11-16 16:41:46.467969] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:09.324 [2024-11-16 16:41:46.468146] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:09.324 [2024-11-16 16:41:46.468180] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:09.324 [2024-11-16 16:41:46.555065] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:09.324 16:41:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.324 16:41:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.324 16:41:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.324 16:41:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.324 16:41:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.324 16:41:46 -- common/autotest_common.sh@10 -- # set +x 
00:22:09.324 16:41:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.324 16:41:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.324 [2024-11-16 16:41:46.610095] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:09.324 [2024-11-16 16:41:46.610143] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:09.324 [2024-11-16 16:41:46.610168] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:09.324 [2024-11-16 16:41:46.610184] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:09.324 [2024-11-16 16:41:46.610192] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:09.324 [2024-11-16 16:41:46.616449] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9700c0 was disconnected and freed. delete nvme_qpair. 00:22:09.324 16:41:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:09.324 16:41:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:10.259 16:41:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.259 16:41:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.259 16:41:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.259 16:41:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.259 16:41:47 -- common/autotest_common.sh@10 -- # set +x 00:22:10.259 16:41:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.259 16:41:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.259 16:41:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.259 16:41:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:10.259 16:41:47 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:10.259 16:41:47 -- host/discovery_remove_ifc.sh@90 -- # killprocess 97021 00:22:10.259 16:41:47 -- common/autotest_common.sh@936 -- # '[' -z 97021 ']' 00:22:10.259 16:41:47 -- common/autotest_common.sh@940 -- # kill -0 97021 00:22:10.259 16:41:47 -- common/autotest_common.sh@941 -- # uname 00:22:10.259 16:41:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:10.259 16:41:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97021 00:22:10.259 killing process with pid 97021 00:22:10.259 16:41:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:10.259 16:41:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:10.259 16:41:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97021' 00:22:10.259 16:41:47 -- common/autotest_common.sh@955 -- # kill 97021 00:22:10.259 16:41:47 -- common/autotest_common.sh@960 -- # wait 97021 00:22:10.518 16:41:47 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:10.518 16:41:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:10.518 16:41:47 -- nvmf/common.sh@116 -- # sync 00:22:10.777 16:41:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:10.777 16:41:48 -- nvmf/common.sh@119 -- # set +e 00:22:10.777 16:41:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:10.777 16:41:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:10.777 rmmod nvme_tcp 00:22:10.777 rmmod nvme_fabrics 00:22:10.777 rmmod nvme_keyring 00:22:10.777 16:41:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:10.777 16:41:48 -- nvmf/common.sh@123 -- # set -e 00:22:10.777 16:41:48 -- 
nvmf/common.sh@124 -- # return 0 00:22:10.777 16:41:48 -- nvmf/common.sh@477 -- # '[' -n 96971 ']' 00:22:10.777 16:41:48 -- nvmf/common.sh@478 -- # killprocess 96971 00:22:10.777 16:41:48 -- common/autotest_common.sh@936 -- # '[' -z 96971 ']' 00:22:10.777 16:41:48 -- common/autotest_common.sh@940 -- # kill -0 96971 00:22:10.777 16:41:48 -- common/autotest_common.sh@941 -- # uname 00:22:10.777 16:41:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:10.777 16:41:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96971 00:22:10.777 killing process with pid 96971 00:22:10.777 16:41:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:10.777 16:41:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:10.777 16:41:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96971' 00:22:10.777 16:41:48 -- common/autotest_common.sh@955 -- # kill 96971 00:22:10.777 16:41:48 -- common/autotest_common.sh@960 -- # wait 96971 00:22:11.036 16:41:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:11.036 16:41:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:11.036 16:41:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:11.036 16:41:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.036 16:41:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:11.036 16:41:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.036 16:41:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.036 16:41:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.036 16:41:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:11.036 00:22:11.036 real 0m15.228s 00:22:11.036 user 0m26.394s 00:22:11.036 sys 0m1.580s 00:22:11.036 16:41:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:11.036 16:41:48 -- common/autotest_common.sh@10 -- # set +x 00:22:11.036 ************************************ 00:22:11.036 END TEST nvmf_discovery_remove_ifc 00:22:11.036 ************************************ 00:22:11.036 16:41:48 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:11.036 16:41:48 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:11.036 16:41:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:11.036 16:41:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.036 16:41:48 -- common/autotest_common.sh@10 -- # set +x 00:22:11.036 ************************************ 00:22:11.036 START TEST nvmf_digest 00:22:11.036 ************************************ 00:22:11.036 16:41:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:11.036 * Looking for test storage... 
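The teardown just above follows a pattern that recurs throughout this log: killprocess validates the pid and refuses to kill anything it does not recognize as an SPDK reactor, and nvmftestfini then unloads the initiator modules (the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines) and flushes the test addresses. A sketch of killprocess as it can be read off the autotest_common.sh xtrace (the real helper also special-cases processes running under sudo):

  # Sketch assembled from the @936-@960 xtrace lines above.
  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1                            # @936
      kill -0 "$pid" 2> /dev/null || return 0              # @940: already gone
      if [[ $(uname) == Linux ]]; then                     # @941
          process_name=$(ps --no-headers -o comm= "$pid")  # @942
      fi
      [[ $process_name != sudo ]] || return 1              # @946 (simplified)
      echo "killing process with pid $pid"                 # @954
      kill "$pid"                                          # @955
      wait "$pid" || true                                  # @960: reap it
  }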
00:22:11.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:11.036 16:41:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:11.036 16:41:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:11.036 16:41:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:11.295 16:41:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:11.295 16:41:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:11.295 16:41:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:11.295 16:41:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:11.295 16:41:48 -- scripts/common.sh@335 -- # IFS=.-: 00:22:11.295 16:41:48 -- scripts/common.sh@335 -- # read -ra ver1 00:22:11.295 16:41:48 -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.295 16:41:48 -- scripts/common.sh@336 -- # read -ra ver2 00:22:11.295 16:41:48 -- scripts/common.sh@337 -- # local 'op=<' 00:22:11.295 16:41:48 -- scripts/common.sh@339 -- # ver1_l=2 00:22:11.295 16:41:48 -- scripts/common.sh@340 -- # ver2_l=1 00:22:11.295 16:41:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:11.295 16:41:48 -- scripts/common.sh@343 -- # case "$op" in 00:22:11.295 16:41:48 -- scripts/common.sh@344 -- # : 1 00:22:11.295 16:41:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:11.295 16:41:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.295 16:41:48 -- scripts/common.sh@364 -- # decimal 1 00:22:11.295 16:41:48 -- scripts/common.sh@352 -- # local d=1 00:22:11.295 16:41:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.295 16:41:48 -- scripts/common.sh@354 -- # echo 1 00:22:11.295 16:41:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:11.295 16:41:48 -- scripts/common.sh@365 -- # decimal 2 00:22:11.295 16:41:48 -- scripts/common.sh@352 -- # local d=2 00:22:11.295 16:41:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.295 16:41:48 -- scripts/common.sh@354 -- # echo 2 00:22:11.295 16:41:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:11.295 16:41:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:11.295 16:41:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:11.295 16:41:48 -- scripts/common.sh@367 -- # return 0 00:22:11.295 16:41:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.295 16:41:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.295 --rc genhtml_branch_coverage=1 00:22:11.295 --rc genhtml_function_coverage=1 00:22:11.295 --rc genhtml_legend=1 00:22:11.295 --rc geninfo_all_blocks=1 00:22:11.295 --rc geninfo_unexecuted_blocks=1 00:22:11.295 00:22:11.295 ' 00:22:11.295 16:41:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.295 --rc genhtml_branch_coverage=1 00:22:11.295 --rc genhtml_function_coverage=1 00:22:11.295 --rc genhtml_legend=1 00:22:11.295 --rc geninfo_all_blocks=1 00:22:11.295 --rc geninfo_unexecuted_blocks=1 00:22:11.295 00:22:11.295 ' 00:22:11.295 16:41:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.295 --rc genhtml_branch_coverage=1 00:22:11.295 --rc genhtml_function_coverage=1 00:22:11.295 --rc genhtml_legend=1 00:22:11.295 --rc geninfo_all_blocks=1 00:22:11.295 --rc geninfo_unexecuted_blocks=1 00:22:11.295 00:22:11.295 ' 00:22:11.295 
16:41:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:11.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.295 --rc genhtml_branch_coverage=1 00:22:11.295 --rc genhtml_function_coverage=1 00:22:11.295 --rc genhtml_legend=1 00:22:11.295 --rc geninfo_all_blocks=1 00:22:11.295 --rc geninfo_unexecuted_blocks=1 00:22:11.295 00:22:11.295 ' 00:22:11.295 16:41:48 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:11.295 16:41:48 -- nvmf/common.sh@7 -- # uname -s 00:22:11.295 16:41:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.295 16:41:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.295 16:41:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.295 16:41:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.295 16:41:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.295 16:41:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.295 16:41:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.295 16:41:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.295 16:41:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.295 16:41:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.295 16:41:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:22:11.295 16:41:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:22:11.295 16:41:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.295 16:41:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.295 16:41:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:11.295 16:41:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:11.295 16:41:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.295 16:41:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.295 16:41:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.295 16:41:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.295 16:41:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.295 16:41:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.295 16:41:48 -- paths/export.sh@5 -- # export PATH 00:22:11.295 16:41:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.295 16:41:48 -- nvmf/common.sh@46 -- # : 0 00:22:11.295 16:41:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:11.296 16:41:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:11.296 16:41:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:11.296 16:41:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.296 16:41:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.296 16:41:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:11.296 16:41:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:11.296 16:41:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:11.296 16:41:48 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:11.296 16:41:48 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:11.296 16:41:48 -- host/digest.sh@16 -- # runtime=2 00:22:11.296 16:41:48 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:11.296 16:41:48 -- host/digest.sh@132 -- # nvmftestinit 00:22:11.296 16:41:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:11.296 16:41:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.296 16:41:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:11.296 16:41:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:11.296 16:41:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:11.296 16:41:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.296 16:41:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.296 16:41:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.296 16:41:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:11.296 16:41:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:11.296 16:41:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:11.296 16:41:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:11.296 16:41:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:11.296 16:41:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:11.296 16:41:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.296 16:41:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.296 16:41:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:11.296 16:41:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:11.296 16:41:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
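The scripts/common.sh expansion a little further up (lt 1.15 2, used to pick lcov coverage options) is a pure-bash semantic version compare: both versions are split on the characters . - :, then walked component by component until one side wins. Condensed from the trace:

  # Condensed reconstruction of the traced cmp_versions walk.
  ver1=1.15 ver2=2
  IFS=.-: read -ra v1 <<< "$ver1"     # @335/@336: split into components
  IFS=.-: read -ra v2 <<< "$ver2"
  max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do   # @363: walk the longer version
      a=${v1[i]:-0} b=${v2[i]:-0}     # missing components compare as 0
      (( a > b )) && { echo ">"; break; }   # @366
      (( a < b )) && { echo "<"; break; }   # @367: 1 < 2, so lcov 1.15 is older
  done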
00:22:11.296 16:41:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:11.296 16:41:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:11.296 16:41:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.296 16:41:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:11.296 16:41:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:11.296 16:41:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:11.296 16:41:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:11.296 16:41:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:11.296 16:41:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:11.296 Cannot find device "nvmf_tgt_br" 00:22:11.296 16:41:48 -- nvmf/common.sh@154 -- # true 00:22:11.296 16:41:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:11.296 Cannot find device "nvmf_tgt_br2" 00:22:11.296 16:41:48 -- nvmf/common.sh@155 -- # true 00:22:11.296 16:41:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:11.296 16:41:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:11.296 Cannot find device "nvmf_tgt_br" 00:22:11.296 16:41:48 -- nvmf/common.sh@157 -- # true 00:22:11.296 16:41:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:11.296 Cannot find device "nvmf_tgt_br2" 00:22:11.296 16:41:48 -- nvmf/common.sh@158 -- # true 00:22:11.296 16:41:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:11.296 16:41:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:11.296 16:41:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.296 16:41:48 -- nvmf/common.sh@161 -- # true 00:22:11.296 16:41:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:11.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.296 16:41:48 -- nvmf/common.sh@162 -- # true 00:22:11.296 16:41:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:11.296 16:41:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:11.296 16:41:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:11.296 16:41:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:11.296 16:41:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:11.296 16:41:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:11.555 16:41:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:11.555 16:41:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:11.555 16:41:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:11.555 16:41:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:11.555 16:41:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:11.555 16:41:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:11.555 16:41:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:11.555 16:41:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:11.555 16:41:48 -- nvmf/common.sh@187 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:11.555 16:41:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:11.555 16:41:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:11.555 16:41:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:11.555 16:41:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:11.555 16:41:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:11.555 16:41:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:11.555 16:41:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:11.555 16:41:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:11.555 16:41:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:11.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:22:11.555 00:22:11.555 --- 10.0.0.2 ping statistics --- 00:22:11.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.555 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:11.555 16:41:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:11.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:11.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:11.555 00:22:11.555 --- 10.0.0.3 ping statistics --- 00:22:11.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.555 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:11.555 16:41:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:11.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:22:11.555 00:22:11.555 --- 10.0.0.1 ping statistics --- 00:22:11.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.555 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:22:11.555 16:41:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.555 16:41:48 -- nvmf/common.sh@421 -- # return 0 00:22:11.555 16:41:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:11.555 16:41:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.555 16:41:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:11.555 16:41:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:11.555 16:41:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.555 16:41:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:11.555 16:41:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:11.555 16:41:48 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:11.555 16:41:48 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:11.555 16:41:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:11.555 16:41:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.555 16:41:48 -- common/autotest_common.sh@10 -- # set +x 00:22:11.555 ************************************ 00:22:11.555 START TEST nvmf_digest_clean 00:22:11.555 ************************************ 00:22:11.555 16:41:48 -- common/autotest_common.sh@1114 -- # run_digest 00:22:11.555 16:41:48 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:11.555 16:41:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:11.555 16:41:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.555 16:41:48 -- common/autotest_common.sh@10 -- # set +x 
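The "Cannot find device …" and "Cannot open network namespace …" messages above are expected noise: nvmf_veth_init first tears down any leftover topology, and on a fresh runner there is nothing to delete. What it then builds, condensed from the trace, is a two-sided veth/bridge fabric whose reachability is proven by the three pings before nvme-tcp is loaded:

  # Topology built by nvmf_veth_init, as traced above:
  #   nvmf_init_if (10.0.0.1, host)      <-veth-> nvmf_init_br -\
  #   nvmf_tgt_if  (10.0.0.2, in netns)  <-veth-> nvmf_tgt_br  --> nvmf_br
  #   nvmf_tgt_if2 (10.0.0.3, in netns)  <-veth-> nvmf_tgt_br2 -/   (bridge)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # host -> target; a failure here aborts the test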
00:22:11.555 16:41:48 -- nvmf/common.sh@469 -- # nvmfpid=97461 00:22:11.555 16:41:48 -- nvmf/common.sh@470 -- # waitforlisten 97461 00:22:11.555 16:41:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:11.555 16:41:48 -- common/autotest_common.sh@829 -- # '[' -z 97461 ']' 00:22:11.555 16:41:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.555 16:41:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.555 16:41:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.555 16:41:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.555 16:41:48 -- common/autotest_common.sh@10 -- # set +x 00:22:11.555 [2024-11-16 16:41:49.032365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:11.555 [2024-11-16 16:41:49.032456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.816 [2024-11-16 16:41:49.175116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.816 [2024-11-16 16:41:49.261937] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:11.816 [2024-11-16 16:41:49.262151] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.816 [2024-11-16 16:41:49.262203] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.816 [2024-11-16 16:41:49.262217] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
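nvmfappstart launches nvmf_tgt inside the target namespace with --wait-for-rpc, so after the EAL banner above the app idles until its framework is started over RPC; waitforlisten, whose locals (rpc_addr=/var/tmp/spdk.sock, max_retries=100) appear in the trace, polls the socket until the app answers. A sketch of the implied loop (the real helper in autotest_common.sh also verifies the pid is still alive each round):

  # Sketch of waitforlisten as suggested by the traced locals.
  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 0; i < max_retries; i++ )); do
          # any trivial RPC succeeds only once the app is listening
          rpc_cmd -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
          sleep 0.5
      done
      return 1   # the caller treats this as a fatal startup failure
  }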
00:22:11.816 [2024-11-16 16:41:49.262253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.754 16:41:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.754 16:41:50 -- common/autotest_common.sh@862 -- # return 0 00:22:12.754 16:41:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:12.754 16:41:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:12.754 16:41:50 -- common/autotest_common.sh@10 -- # set +x 00:22:12.754 16:41:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.754 16:41:50 -- host/digest.sh@120 -- # common_target_config 00:22:12.754 16:41:50 -- host/digest.sh@43 -- # rpc_cmd 00:22:12.754 16:41:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.754 16:41:50 -- common/autotest_common.sh@10 -- # set +x 00:22:12.754 null0 00:22:12.754 [2024-11-16 16:41:50.198596] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.754 [2024-11-16 16:41:50.222722] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.754 16:41:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.754 16:41:50 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:12.754 16:41:50 -- host/digest.sh@77 -- # local rw bs qd 00:22:12.754 16:41:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:12.754 16:41:50 -- host/digest.sh@80 -- # rw=randread 00:22:12.754 16:41:50 -- host/digest.sh@80 -- # bs=4096 00:22:12.754 16:41:50 -- host/digest.sh@80 -- # qd=128 00:22:12.754 16:41:50 -- host/digest.sh@82 -- # bperfpid=97517 00:22:12.754 16:41:50 -- host/digest.sh@83 -- # waitforlisten 97517 /var/tmp/bperf.sock 00:22:12.754 16:41:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:12.754 16:41:50 -- common/autotest_common.sh@829 -- # '[' -z 97517 ']' 00:22:12.754 16:41:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:12.754 16:41:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:12.754 16:41:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:12.754 16:41:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.754 16:41:50 -- common/autotest_common.sh@10 -- # set +x 00:22:13.013 [2024-11-16 16:41:50.278388] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
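run_bperf (host/digest.sh@77-83, expanded above) parameterizes one bdevperf pass: the workload, IO size and queue depth it records become -w/-o/-q on the command line, each instance gets its own RPC socket, and -z plus --wait-for-rpc keep it idle, with no bdevs, until the test attaches one. The launch for this first pass, with the flags exactly as traced:

  # bdevperf launch for digest_clean pass 1, per the trace:
  #   -m 2                core mask 0x2 (core 1), off the target's core 0
  #   -r /var/tmp/bperf.sock   per-instance RPC socket
  #   -w/-o/-q/-t         workload, IO size, queue depth, runtime in seconds
  #   -z --wait-for-rpc   start empty and wait for RPC configuration
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 \
      -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 \
      -z --wait-for-rpc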
00:22:13.013 [2024-11-16 16:41:50.278469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97517 ] 00:22:13.013 [2024-11-16 16:41:50.420646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.013 [2024-11-16 16:41:50.490843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.950 16:41:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.950 16:41:51 -- common/autotest_common.sh@862 -- # return 0 00:22:13.950 16:41:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:13.950 16:41:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:13.950 16:41:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:14.210 16:41:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:14.210 16:41:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:14.470 nvme0n1 00:22:14.470 16:41:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:14.470 16:41:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:14.730 Running I/O for 2 seconds... 00:22:16.635 00:22:16.635 Latency(us) 00:22:16.635 [2024-11-16T16:41:54.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.635 [2024-11-16T16:41:54.126Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:16.635 nvme0n1 : 2.00 23942.06 93.52 0.00 0.00 5341.76 2457.60 18945.86 00:22:16.635 [2024-11-16T16:41:54.126Z] =================================================================================================================== 00:22:16.635 [2024-11-16T16:41:54.126Z] Total : 23942.06 93.52 0.00 0.00 5341.76 2457.60 18945.86 00:22:16.635 0 00:22:16.635 16:41:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:16.635 16:41:53 -- host/digest.sh@92 -- # get_accel_stats 00:22:16.635 16:41:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:16.635 16:41:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:16.635 | select(.opcode=="crc32c") 00:22:16.635 | "\(.module_name) \(.executed)"' 00:22:16.635 16:41:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:16.895 16:41:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:16.895 16:41:54 -- host/digest.sh@93 -- # exp_module=software 00:22:16.895 16:41:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:16.895 16:41:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:16.895 16:41:54 -- host/digest.sh@97 -- # killprocess 97517 00:22:16.895 16:41:54 -- common/autotest_common.sh@936 -- # '[' -z 97517 ']' 00:22:16.895 16:41:54 -- common/autotest_common.sh@940 -- # kill -0 97517 00:22:16.895 16:41:54 -- common/autotest_common.sh@941 -- # uname 00:22:16.895 16:41:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:16.895 16:41:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97517 00:22:16.895 killing process with pid 97517 00:22:16.895 Received shutdown signal, test time was about 
2.000000 seconds 00:22:16.895 00:22:16.895 Latency(us) 00:22:16.895 [2024-11-16T16:41:54.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.895 [2024-11-16T16:41:54.386Z] =================================================================================================================== 00:22:16.895 [2024-11-16T16:41:54.386Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:16.895 16:41:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:16.895 16:41:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:16.895 16:41:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97517' 00:22:16.895 16:41:54 -- common/autotest_common.sh@955 -- # kill 97517 00:22:16.895 16:41:54 -- common/autotest_common.sh@960 -- # wait 97517 00:22:17.155 16:41:54 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:17.155 16:41:54 -- host/digest.sh@77 -- # local rw bs qd 00:22:17.155 16:41:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:17.155 16:41:54 -- host/digest.sh@80 -- # rw=randread 00:22:17.155 16:41:54 -- host/digest.sh@80 -- # bs=131072 00:22:17.155 16:41:54 -- host/digest.sh@80 -- # qd=16 00:22:17.155 16:41:54 -- host/digest.sh@82 -- # bperfpid=97607 00:22:17.155 16:41:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:17.155 16:41:54 -- host/digest.sh@83 -- # waitforlisten 97607 /var/tmp/bperf.sock 00:22:17.155 16:41:54 -- common/autotest_common.sh@829 -- # '[' -z 97607 ']' 00:22:17.155 16:41:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:17.155 16:41:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.155 16:41:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:17.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:17.155 16:41:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.155 16:41:54 -- common/autotest_common.sh@10 -- # set +x 00:22:17.155 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:17.155 Zero copy mechanism will not be used. 00:22:17.155 [2024-11-16 16:41:54.515074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
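The pass/fail decision for each run happens in the accel_get_stats exchange above: get_accel_stats queries the bperf instance, the jq filter keeps only the crc32c opcode, and digest.sh@93-95 asserts both that the executing module is "software" (no hardware offload is configured on this runner) and that the executed count is positive, i.e. that --ddgst really routed every IO through a data-digest CRC. The extraction as traced:

  # Digest accounting check, repeated after every bperf pass:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"'
  # expected output shape: "software <nonzero count>"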
00:22:17.155 [2024-11-16 16:41:54.515143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97607 ] 00:22:17.155 [2024-11-16 16:41:54.639794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.414 [2024-11-16 16:41:54.698578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.981 16:41:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.981 16:41:55 -- common/autotest_common.sh@862 -- # return 0 00:22:17.981 16:41:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:17.981 16:41:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:17.981 16:41:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:18.549 16:41:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:18.549 16:41:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:18.549 nvme0n1 00:22:18.549 16:41:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:18.549 16:41:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:18.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:18.807 Zero copy mechanism will not be used. 00:22:18.807 Running I/O for 2 seconds... 00:22:20.711 00:22:20.711 Latency(us) 00:22:20.711 [2024-11-16T16:41:58.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.711 [2024-11-16T16:41:58.202Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:20.711 nvme0n1 : 2.04 8864.19 1108.02 0.00 0.00 1771.38 774.52 41943.04 00:22:20.711 [2024-11-16T16:41:58.202Z] =================================================================================================================== 00:22:20.711 [2024-11-16T16:41:58.202Z] Total : 8864.19 1108.02 0.00 0.00 1771.38 774.52 41943.04 00:22:20.711 0 00:22:20.711 16:41:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:20.712 16:41:58 -- host/digest.sh@92 -- # get_accel_stats 00:22:20.712 16:41:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:20.712 16:41:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:20.712 | select(.opcode=="crc32c") 00:22:20.712 | "\(.module_name) \(.executed)"' 00:22:20.712 16:41:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:20.970 16:41:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:20.970 16:41:58 -- host/digest.sh@93 -- # exp_module=software 00:22:20.970 16:41:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:20.970 16:41:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:20.970 16:41:58 -- host/digest.sh@97 -- # killprocess 97607 00:22:20.970 16:41:58 -- common/autotest_common.sh@936 -- # '[' -z 97607 ']' 00:22:20.970 16:41:58 -- common/autotest_common.sh@940 -- # kill -0 97607 00:22:20.970 16:41:58 -- common/autotest_common.sh@941 -- # uname 00:22:20.970 16:41:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:20.970 16:41:58 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 97607 00:22:21.230 16:41:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:21.230 16:41:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:21.230 killing process with pid 97607 00:22:21.230 16:41:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97607' 00:22:21.230 16:41:58 -- common/autotest_common.sh@955 -- # kill 97607 00:22:21.230 Received shutdown signal, test time was about 2.000000 seconds 00:22:21.230 00:22:21.230 Latency(us) 00:22:21.230 [2024-11-16T16:41:58.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.230 [2024-11-16T16:41:58.721Z] =================================================================================================================== 00:22:21.230 [2024-11-16T16:41:58.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.230 16:41:58 -- common/autotest_common.sh@960 -- # wait 97607 00:22:21.230 16:41:58 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:21.230 16:41:58 -- host/digest.sh@77 -- # local rw bs qd 00:22:21.230 16:41:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:21.230 16:41:58 -- host/digest.sh@80 -- # rw=randwrite 00:22:21.230 16:41:58 -- host/digest.sh@80 -- # bs=4096 00:22:21.230 16:41:58 -- host/digest.sh@80 -- # qd=128 00:22:21.230 16:41:58 -- host/digest.sh@82 -- # bperfpid=97692 00:22:21.230 16:41:58 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:21.230 16:41:58 -- host/digest.sh@83 -- # waitforlisten 97692 /var/tmp/bperf.sock 00:22:21.230 16:41:58 -- common/autotest_common.sh@829 -- # '[' -z 97692 ']' 00:22:21.230 16:41:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:21.230 16:41:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:21.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:21.230 16:41:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:21.230 16:41:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:21.230 16:41:58 -- common/autotest_common.sh@10 -- # set +x 00:22:21.230 [2024-11-16 16:41:58.704701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:21.230 [2024-11-16 16:41:58.704785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97692 ] 00:22:21.489 [2024-11-16 16:41:58.837633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.489 [2024-11-16 16:41:58.901761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.425 16:41:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.425 16:41:59 -- common/autotest_common.sh@862 -- # return 0 00:22:22.425 16:41:59 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:22.425 16:41:59 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:22.425 16:41:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:22.425 16:41:59 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:22.425 16:41:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:23.016 nvme0n1 00:22:23.016 16:42:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:23.016 16:42:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:23.016 Running I/O for 2 seconds... 00:22:24.963 00:22:24.963 Latency(us) 00:22:24.963 [2024-11-16T16:42:02.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.963 [2024-11-16T16:42:02.454Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:24.963 nvme0n1 : 2.01 28165.14 110.02 0.00 0.00 4539.77 1861.82 9175.04 00:22:24.963 [2024-11-16T16:42:02.454Z] =================================================================================================================== 00:22:24.963 [2024-11-16T16:42:02.454Z] Total : 28165.14 110.02 0.00 0.00 4539.77 1861.82 9175.04 00:22:24.963 0 00:22:24.963 16:42:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:24.963 16:42:02 -- host/digest.sh@92 -- # get_accel_stats 00:22:24.963 16:42:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:24.963 16:42:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:24.963 16:42:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:24.963 | select(.opcode=="crc32c") 00:22:24.963 | "\(.module_name) \(.executed)"' 00:22:25.222 16:42:02 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:25.222 16:42:02 -- host/digest.sh@93 -- # exp_module=software 00:22:25.222 16:42:02 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:25.222 16:42:02 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:25.222 16:42:02 -- host/digest.sh@97 -- # killprocess 97692 00:22:25.222 16:42:02 -- common/autotest_common.sh@936 -- # '[' -z 97692 ']' 00:22:25.222 16:42:02 -- common/autotest_common.sh@940 -- # kill -0 97692 00:22:25.222 16:42:02 -- common/autotest_common.sh@941 -- # uname 00:22:25.222 16:42:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.222 16:42:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97692 00:22:25.222 16:42:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:25.222 
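Taken together, digest_clean sweeps both IO directions and both IO shapes; the "I/O size of 131072 is greater than zero copy threshold (65536)" notices on the large-block passes mean the TCP sock layer skips its zero-copy send path and copies buffers for IOs above 64 KiB. The four dispatches, as they appear at host/digest.sh@122-125 across this part of the trace (the fourth follows just below):

  # The four digest_clean passes (rw, block size, queue depth):
  run_bperf randread  4096   128   # 4 KiB reads, deep queue
  run_bperf randread  131072 16    # 128 KiB reads, above the zero-copy limit
  run_bperf randwrite 4096   128   # 4 KiB writes
  run_bperf randwrite 131072 16    # 128 KiB writes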
16:42:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:25.222 killing process with pid 97692 00:22:25.222 16:42:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97692' 00:22:25.222 16:42:02 -- common/autotest_common.sh@955 -- # kill 97692 00:22:25.222 Received shutdown signal, test time was about 2.000000 seconds 00:22:25.222 00:22:25.222 Latency(us) 00:22:25.222 [2024-11-16T16:42:02.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.222 [2024-11-16T16:42:02.713Z] =================================================================================================================== 00:22:25.222 [2024-11-16T16:42:02.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.222 16:42:02 -- common/autotest_common.sh@960 -- # wait 97692 00:22:25.481 16:42:02 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:25.481 16:42:02 -- host/digest.sh@77 -- # local rw bs qd 00:22:25.481 16:42:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:25.481 16:42:02 -- host/digest.sh@80 -- # rw=randwrite 00:22:25.481 16:42:02 -- host/digest.sh@80 -- # bs=131072 00:22:25.481 16:42:02 -- host/digest.sh@80 -- # qd=16 00:22:25.481 16:42:02 -- host/digest.sh@82 -- # bperfpid=97782 00:22:25.481 16:42:02 -- host/digest.sh@83 -- # waitforlisten 97782 /var/tmp/bperf.sock 00:22:25.481 16:42:02 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:25.481 16:42:02 -- common/autotest_common.sh@829 -- # '[' -z 97782 ']' 00:22:25.481 16:42:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:25.481 16:42:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:25.481 16:42:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:25.481 16:42:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.481 16:42:02 -- common/autotest_common.sh@10 -- # set +x 00:22:25.481 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:25.481 Zero copy mechanism will not be used. 00:22:25.481 [2024-11-16 16:42:02.861706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:25.481 [2024-11-16 16:42:02.861806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97782 ] 00:22:25.739 [2024-11-16 16:42:03.000936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.739 [2024-11-16 16:42:03.065306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.307 16:42:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.307 16:42:03 -- common/autotest_common.sh@862 -- # return 0 00:22:26.307 16:42:03 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:26.307 16:42:03 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:26.307 16:42:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:26.874 16:42:04 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:26.874 16:42:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:26.874 nvme0n1 00:22:26.874 16:42:04 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:26.874 16:42:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:27.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:27.133 Zero copy mechanism will not be used. 00:22:27.133 Running I/O for 2 seconds... 00:22:29.037 00:22:29.037 Latency(us) 00:22:29.037 [2024-11-16T16:42:06.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.037 [2024-11-16T16:42:06.528Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:29.037 nvme0n1 : 2.00 8243.25 1030.41 0.00 0.00 1936.80 1608.61 6851.49 00:22:29.037 [2024-11-16T16:42:06.528Z] =================================================================================================================== 00:22:29.037 [2024-11-16T16:42:06.528Z] Total : 8243.25 1030.41 0.00 0.00 1936.80 1608.61 6851.49 00:22:29.037 0 00:22:29.037 16:42:06 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:29.037 16:42:06 -- host/digest.sh@92 -- # get_accel_stats 00:22:29.037 16:42:06 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:29.037 16:42:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:29.037 16:42:06 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:29.037 | select(.opcode=="crc32c") 00:22:29.037 | "\(.module_name) \(.executed)"' 00:22:29.296 16:42:06 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:29.296 16:42:06 -- host/digest.sh@93 -- # exp_module=software 00:22:29.296 16:42:06 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:29.296 16:42:06 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:29.296 16:42:06 -- host/digest.sh@97 -- # killprocess 97782 00:22:29.296 16:42:06 -- common/autotest_common.sh@936 -- # '[' -z 97782 ']' 00:22:29.296 16:42:06 -- common/autotest_common.sh@940 -- # kill -0 97782 00:22:29.296 16:42:06 -- common/autotest_common.sh@941 -- # uname 00:22:29.296 16:42:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:29.296 16:42:06 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 97782 00:22:29.296 16:42:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:29.296 16:42:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:29.296 killing process with pid 97782 00:22:29.296 16:42:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97782' 00:22:29.296 16:42:06 -- common/autotest_common.sh@955 -- # kill 97782 00:22:29.296 Received shutdown signal, test time was about 2.000000 seconds 00:22:29.296 00:22:29.296 Latency(us) 00:22:29.296 [2024-11-16T16:42:06.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.296 [2024-11-16T16:42:06.787Z] =================================================================================================================== 00:22:29.296 [2024-11-16T16:42:06.787Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:29.296 16:42:06 -- common/autotest_common.sh@960 -- # wait 97782 00:22:29.555 16:42:06 -- host/digest.sh@126 -- # killprocess 97461 00:22:29.555 16:42:06 -- common/autotest_common.sh@936 -- # '[' -z 97461 ']' 00:22:29.555 16:42:06 -- common/autotest_common.sh@940 -- # kill -0 97461 00:22:29.555 16:42:06 -- common/autotest_common.sh@941 -- # uname 00:22:29.555 16:42:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:29.555 16:42:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97461 00:22:29.555 16:42:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:29.555 16:42:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:29.555 killing process with pid 97461 00:22:29.555 16:42:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97461' 00:22:29.555 16:42:06 -- common/autotest_common.sh@955 -- # kill 97461 00:22:29.555 16:42:06 -- common/autotest_common.sh@960 -- # wait 97461 00:22:29.814 00:22:29.814 real 0m18.275s 00:22:29.814 user 0m33.320s 00:22:29.814 sys 0m5.336s 00:22:29.814 16:42:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:29.814 ************************************ 00:22:29.814 END TEST nvmf_digest_clean 00:22:29.814 ************************************ 00:22:29.814 16:42:07 -- common/autotest_common.sh@10 -- # set +x 00:22:29.814 16:42:07 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:29.814 16:42:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:29.814 16:42:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:29.814 16:42:07 -- common/autotest_common.sh@10 -- # set +x 00:22:29.814 ************************************ 00:22:29.814 START TEST nvmf_digest_error 00:22:29.814 ************************************ 00:22:29.814 16:42:07 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:29.814 16:42:07 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:29.814 16:42:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:29.814 16:42:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.814 16:42:07 -- common/autotest_common.sh@10 -- # set +x 00:22:30.074 16:42:07 -- nvmf/common.sh@469 -- # nvmfpid=97891 00:22:30.074 16:42:07 -- nvmf/common.sh@470 -- # waitforlisten 97891 00:22:30.074 16:42:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:30.074 16:42:07 -- common/autotest_common.sh@829 -- # '[' -z 97891 ']' 00:22:30.074 16:42:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.074 16:42:07 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.074 16:42:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.074 16:42:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.074 16:42:07 -- common/autotest_common.sh@10 -- # set +x 00:22:30.074 [2024-11-16 16:42:07.366123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:30.074 [2024-11-16 16:42:07.366224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.074 [2024-11-16 16:42:07.505813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.333 [2024-11-16 16:42:07.565561] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:30.333 [2024-11-16 16:42:07.565697] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.333 [2024-11-16 16:42:07.565708] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.333 [2024-11-16 16:42:07.565716] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.333 [2024-11-16 16:42:07.565748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.901 16:42:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.901 16:42:08 -- common/autotest_common.sh@862 -- # return 0 00:22:30.901 16:42:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:30.901 16:42:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.901 16:42:08 -- common/autotest_common.sh@10 -- # set +x 00:22:30.901 16:42:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.901 16:42:08 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:30.901 16:42:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.901 16:42:08 -- common/autotest_common.sh@10 -- # set +x 00:22:30.901 [2024-11-16 16:42:08.374283] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:30.901 16:42:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.901 16:42:08 -- host/digest.sh@104 -- # common_target_config 00:22:30.901 16:42:08 -- host/digest.sh@43 -- # rpc_cmd 00:22:30.901 16:42:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.901 16:42:08 -- common/autotest_common.sh@10 -- # set +x 00:22:31.160 null0 00:22:31.160 [2024-11-16 16:42:08.506183] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.160 [2024-11-16 16:42:08.530336] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.160 16:42:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.160 16:42:08 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:31.160 16:42:08 -- host/digest.sh@54 -- # local rw bs qd 00:22:31.160 16:42:08 -- host/digest.sh@56 -- # rw=randread 00:22:31.160 16:42:08 -- host/digest.sh@56 -- # bs=4096 00:22:31.160 16:42:08 -- host/digest.sh@56 -- # qd=128 00:22:31.160 16:42:08 -- host/digest.sh@58 -- # bperfpid=97941 00:22:31.160 16:42:08 -- 
00:22:31.160 16:42:08 -- common/autotest_common.sh@829 -- # '[' -z 97941 ']'
00:22:31.160 16:42:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:31.160 16:42:08 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:22:31.160 16:42:08 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:31.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:31.160 16:42:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:31.160 16:42:08 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:31.160 16:42:08 -- common/autotest_common.sh@10 -- # set +x
00:22:31.160 [2024-11-16 16:42:08.588107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[2024-11-16 16:42:08.588197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97941 ]
00:22:31.418 [2024-11-16 16:42:08.729648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:31.418 [2024-11-16 16:42:08.803794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:32.350 16:42:09 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:32.350 16:42:09 -- common/autotest_common.sh@862 -- # return 0
00:22:32.350 16:42:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:32.351 16:42:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:32.351 16:42:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:32.351 16:42:09 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:32.351 16:42:09 -- common/autotest_common.sh@10 -- # set +x
00:22:32.351 16:42:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:32.351 16:42:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:32.351 16:42:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:32.917 nvme0n1
00:22:32.917 16:42:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:32.917 16:42:10 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:32.917 16:42:10 -- common/autotest_common.sh@10 -- # set +x
00:22:32.917 16:42:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:32.917 16:42:10 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:32.917 16:42:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:32.917 Running I/O for 2 seconds...
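
On the initiator side the same pattern repeats against bdevperf's private RPC socket. Collapsed from the host/digest.sh@61-@69 trace lines above (paths and arguments are verbatim from the log; only the shell variables are added for readability), the sequence is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable        # clean slate before connecting
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                               # --ddgst enables NVMe/TCP data digest
  $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256 # corrupt the next 256 CRC32C results
  $bperf -s /var/tmp/bperf.sock perform_tests

With the initiator's CRC32C results corrupted, every received payload fails data digest verification in nvme_tcp_accel_seq_recv_compute_crc32_done(), so each READ in the run below completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) and, under --bdev-retry-count -1, is retried indefinitely. That is the repeating triplet for the rest of the 2-second run: the digest error, the failing READ command, and its error completion.
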
00:22:32.917 [2024-11-16 16:42:10.280796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.280842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.280871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.291852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.291888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.291917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.301250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.301301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.301330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.312781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.312815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.312843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.325123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.325156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.325207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.334329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.334381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.334409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.345698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.345732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.345760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.356850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.356886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.356914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.367144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.367179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.367207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.378658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.378693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.917 [2024-11-16 16:42:10.378720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.917 [2024-11-16 16:42:10.390192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.917 [2024-11-16 16:42:10.390240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.918 [2024-11-16 16:42:10.390268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:32.918 [2024-11-16 16:42:10.402822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:32.918 [2024-11-16 16:42:10.402876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.918 [2024-11-16 16:42:10.402904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.176 [2024-11-16 16:42:10.412776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.176 [2024-11-16 16:42:10.412813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.176 [2024-11-16 16:42:10.412841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.176 [2024-11-16 16:42:10.424518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.176 [2024-11-16 16:42:10.424553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.176 [2024-11-16 16:42:10.424580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.176 [2024-11-16 16:42:10.436985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.176 [2024-11-16 16:42:10.437019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.176 [2024-11-16 16:42:10.437047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.176 [2024-11-16 16:42:10.450307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.176 [2024-11-16 16:42:10.450360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.450372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.462128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.462192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.462205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.471533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.471568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.471595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.481664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.481717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.481745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.491116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.491151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.491178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.500294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.500343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.500372] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.510621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.510655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.510683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.520415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.520483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.520510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.530085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.530144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.530172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.541605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.541639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.541667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.553983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.554017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.554044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.562394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.562428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.562455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.573927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.573961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:33.177 [2024-11-16 16:42:10.573989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.586328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.586362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.586389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.595643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.595678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.595705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.605231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.605281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.605309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.615534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.615569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.615597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.624374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.624422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.624450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.636913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.636948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.636976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.648864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.648899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:12221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.648927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.177 [2024-11-16 16:42:10.660407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.177 [2024-11-16 16:42:10.660457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.177 [2024-11-16 16:42:10.660469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.670664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.670717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.670745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.680154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.680204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.680232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.689650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.689685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.689713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.697909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.697944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.697971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.709432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.709485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.709527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.720919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.720954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.720982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.732068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.732115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.732143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.744767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.744801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.744829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.757032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.757079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.757108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.765566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.765631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.765658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.777297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.777347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.777375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.788386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.788436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.788480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.798597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 
[2024-11-16 16:42:10.798631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.798658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.809798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.809832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.809859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.818702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.818736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.818764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.831030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.831073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.831102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.842098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.842132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.842158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.851369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.851403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.851430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.860550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.860584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.860611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.871193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.871243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.871286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.881791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.881824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.881852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.890721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.890756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.890784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.902935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.902970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.437 [2024-11-16 16:42:10.902997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.437 [2024-11-16 16:42:10.914829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.437 [2024-11-16 16:42:10.914864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.438 [2024-11-16 16:42:10.914892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.438 [2024-11-16 16:42:10.925005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.438 [2024-11-16 16:42:10.925043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.438 [2024-11-16 16:42:10.925083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.696 [2024-11-16 16:42:10.935081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:10.935117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:10.935144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:10.944808] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:10.944842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:10.944871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:10.955261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:10.955296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:10.955323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:10.967985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:10.968020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:10.968048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:10.980388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:10.980437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:10.980482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:10.988518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:10.988552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:10.988580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.000892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.000926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.000954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.012676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.012712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.012739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:33.697 [2024-11-16 16:42:11.021913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.021947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.021974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.032106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.032155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.032183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.043948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.043983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.044010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.055952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.055986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.056014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.067665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.067700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.067727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.076144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.076191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.076220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.088534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.088573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.088586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.098652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.098688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.098716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.110328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.110379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.110421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.123272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.123322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.123349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.132552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.132602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.132629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.141921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.141970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.141998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.697 [2024-11-16 16:42:11.151403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.697 [2024-11-16 16:42:11.151452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.697 [2024-11-16 16:42:11.151480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.698 [2024-11-16 16:42:11.159710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.698 [2024-11-16 16:42:11.159760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.698 [2024-11-16 16:42:11.159787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.698 [2024-11-16 16:42:11.171401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.698 [2024-11-16 16:42:11.171451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.698 [2024-11-16 16:42:11.171478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.698 [2024-11-16 16:42:11.184391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.698 [2024-11-16 16:42:11.184433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.698 [2024-11-16 16:42:11.184446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.956 [2024-11-16 16:42:11.196321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.956 [2024-11-16 16:42:11.196373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.956 [2024-11-16 16:42:11.196401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.956 [2024-11-16 16:42:11.208356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.956 [2024-11-16 16:42:11.208406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.956 [2024-11-16 16:42:11.208434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.956 [2024-11-16 16:42:11.220112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.956 [2024-11-16 16:42:11.220162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.956 [2024-11-16 16:42:11.220190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.956 [2024-11-16 16:42:11.229363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.956 [2024-11-16 16:42:11.229415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.956 [2024-11-16 16:42:11.229444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.956 [2024-11-16 16:42:11.240379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.957 [2024-11-16 16:42:11.240429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.957 
[2024-11-16 16:42:11.240457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.957 [2024-11-16 16:42:11.252238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.957 [2024-11-16 16:42:11.252289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.957 [2024-11-16 16:42:11.252317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.957 [2024-11-16 16:42:11.264296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.957 [2024-11-16 16:42:11.264346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.957 [2024-11-16 16:42:11.264374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.957 [2024-11-16 16:42:11.276966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.957 [2024-11-16 16:42:11.277016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.957 [2024-11-16 16:42:11.277043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.957 [2024-11-16 16:42:11.286471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.957 [2024-11-16 16:42:11.286521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.957 [2024-11-16 16:42:11.286548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.957 [2024-11-16 16:42:11.299462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.957 [2024-11-16 16:42:11.299497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.957 [2024-11-16 16:42:11.299525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.957 [2024-11-16 16:42:11.309786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.957 [2024-11-16 16:42:11.309820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.957 [2024-11-16 16:42:11.309847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:33.957 [2024-11-16 16:42:11.320347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0) 00:22:33.957 [2024-11-16 16:42:11.320381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8074 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.957 [2024-11-16 16:42:11.320408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:33.957 [2024-11-16 16:42:11.333272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7d68d0)
00:22:33.957 [2024-11-16 16:42:11.333322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.957 [2024-11-16 16:42:11.333352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern -- a data digest error from nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done on tqpair=(0x7d68d0), the failing READ (qid:1, len:1) from nvme_qpair.c:243, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474 -- repeats with only timestamp, cid, and lba changing, from 16:42:11.344636 through 16:42:12.261679; the 181 transient transport errors it produces are counted below ...]
00:22:34.994
00:22:34.994 Latency(us)
00:22:34.994 [2024-11-16T16:42:12.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:34.994 [2024-11-16T16:42:12.485Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:34.994 nvme0n1 : 2.01 23082.75 90.17 0.00 0.00 5540.57 2412.92 48854.11
00:22:34.994 [2024-11-16T16:42:12.485Z] ===================================================================================================================
00:22:34.994 [2024-11-16T16:42:12.485Z] Total : 23082.75 90.17 0.00 0.00 5540.57 2412.92 48854.11
00:22:34.994 0
00:22:34.994 16:42:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:34.994 16:42:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:34.994 16:42:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:34.994 16:42:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:34.994 | .driver_specific
00:22:34.994 | .nvme_error
00:22:34.994 | .status_code
00:22:34.994 | .command_transient_transport_error'
00:22:35.253 16:42:12 -- host/digest.sh@71 -- # (( 181 > 0 ))
00:22:35.253 16:42:12 -- host/digest.sh@73 -- # killprocess 97941
00:22:35.253 16:42:12 -- common/autotest_common.sh@936 -- # '[' -z 97941 ']'
00:22:35.253 16:42:12 -- common/autotest_common.sh@940 -- # kill -0 97941
00:22:35.253 16:42:12 -- common/autotest_common.sh@941 -- # uname
00:22:35.253 16:42:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:35.253 16:42:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97941
00:22:35.253 16:42:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:35.253 16:42:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:35.253 killing process with pid 97941
16:42:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97941'
16:42:12 -- common/autotest_common.sh@955 -- # kill 97941
Received shutdown signal, test time was about 2.000000 seconds
00:22:35.253
00:22:35.253 Latency(us)
00:22:35.253 [2024-11-16T16:42:12.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.253 [2024-11-16T16:42:12.744Z] ===================================================================================================================
00:22:35.253 [2024-11-16T16:42:12.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:35.253 16:42:12 -- common/autotest_common.sh@960 -- # wait 97941
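The transient-error check traced above reduces to a single RPC call plus a jq filter over its output. A minimal standalone sketch of the same check, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1, as in this run:

    # Per-bdev I/O statistics; bdev_nvme_set_options --nvme-error-stat (issued at
    # setup) adds NVMe status-code counters under .driver_specific.nvme_error.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # The test passes only if the injected digest errors surfaced as transient
    # transport errors, mirroring the (( 181 > 0 )) check above.
    (( errcount > 0 ))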
00:22:35.512 16:42:12 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:22:35.512 16:42:12 -- host/digest.sh@54 -- # local rw bs qd
00:22:35.512 16:42:12 -- host/digest.sh@56 -- # rw=randread
00:22:35.512 16:42:12 -- host/digest.sh@56 -- # bs=131072
00:22:35.512 16:42:12 -- host/digest.sh@56 -- # qd=16
00:22:35.512 16:42:12 -- host/digest.sh@58 -- # bperfpid=98031
00:22:35.512 16:42:12 -- host/digest.sh@60 -- # waitforlisten 98031 /var/tmp/bperf.sock
00:22:35.512 16:42:12 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:22:35.512 16:42:12 -- common/autotest_common.sh@829 -- # '[' -z 98031 ']'
00:22:35.512 16:42:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:35.512 16:42:12 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:35.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:42:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
16:42:12 -- common/autotest_common.sh@838 -- # xtrace_disable
16:42:12 -- common/autotest_common.sh@10 -- # set +x
00:22:35.512 [2024-11-16 16:42:12.828926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:35.512 [2024-11-16 16:42:12.829044] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98031 ]
00:22:35.512 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:35.512 Zero copy mechanism will not be used.
00:22:35.512 [2024-11-16 16:42:12.966291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:35.770 [2024-11-16 16:42:13.032341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:36.336 16:42:13 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:36.336 16:42:13 -- common/autotest_common.sh@862 -- # return 0
00:22:36.336 16:42:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:36.336 16:42:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:36.594 16:42:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:36.594 16:42:13 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.594 16:42:13 -- common/autotest_common.sh@10 -- # set +x
00:22:36.594 16:42:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.594 16:42:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:36.594 16:42:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:36.852 nvme0n1
00:22:36.852 16:42:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:36.852 16:42:14 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:36.852 16:42:14 -- common/autotest_common.sh@10 -- # set +x
00:22:36.852 16:42:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:36.852 16:42:14 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:36.852 16:42:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
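Condensed from the xtrace output above, the per-run setup amounts to the following sequence. Paths, socket, and target address are exactly as printed in this log; sending the two accel_error_inject_error calls to the same bperf socket is an assumption made to keep the sketch self-contained (the harness issues them through its rpc_cmd helper):

    # Start bdevperf suspended (-z) so the target can be configured over its RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Count NVMe status codes per bdev and retry failed I/O indefinitely, so injected
    # digest errors are recorded as transient-error statistics instead of failing the job.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the controller with data digest (--ddgst) enabled so received payloads are
    # CRC-checked, starting with crc32c error injection disabled.
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 32 crc32c operations, then release the queued randread job.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests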
00:22:36.852 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:36.852 Zero copy mechanism will not be used.
00:22:36.852 Running I/O for 2 seconds...
00:22:36.852 [2024-11-16 16:42:14.314542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10)
00:22:36.852 [2024-11-16 16:42:14.314587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.852 [2024-11-16 16:42:14.314617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same data-digest-error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern repeats for the rest of the run -- len:32 reads on tqpair=(0xe77d10), sqhd cycling 0001/0021/0041/0061, only timestamp, cid, and lba changing -- from 16:42:14.318548 through 16:42:14.463304 in this excerpt ...]
00:22:37.113 [2024-11-16 16:42:14.466959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error
on tqpair=(0xe77d10) 00:22:37.113 [2024-11-16 16:42:14.466993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.467021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.470783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.470818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.470846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.474342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.474377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.474404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.477616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.477650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.477677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.481648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.481683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.481710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.485354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.485393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.485407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.489143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.489200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.489229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.492161] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.492194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.492222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.495882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.495916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.495944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.499430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.499465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.499492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.502885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.502919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.502947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.506791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.506825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.506853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.510814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.510848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.510875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.514453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.514488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.514516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:37.114 [2024-11-16 16:42:14.518079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.518124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.518151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.522001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.522036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.522063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.524985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.525034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.525078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.528679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.528729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.528756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.532841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.532893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.532906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.537400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.537483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.537510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.541637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.541675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.541688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.546085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.546133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.546148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.550393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.550461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.550489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.553929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.553963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.553991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.558110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.558154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.558167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.561235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.561273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.561286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.565128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.565176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.114 [2024-11-16 16:42:14.565212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.114 [2024-11-16 16:42:14.569135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.114 [2024-11-16 16:42:14.569170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.115 [2024-11-16 16:42:14.569235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.115 [2024-11-16 16:42:14.573112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.115 [2024-11-16 16:42:14.573147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.115 [2024-11-16 16:42:14.573175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.115 [2024-11-16 16:42:14.576898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.115 [2024-11-16 16:42:14.576933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.115 [2024-11-16 16:42:14.576960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.115 [2024-11-16 16:42:14.580524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.115 [2024-11-16 16:42:14.580559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.115 [2024-11-16 16:42:14.580587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.115 [2024-11-16 16:42:14.583717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.115 [2024-11-16 16:42:14.583752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.115 [2024-11-16 16:42:14.583779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.115 [2024-11-16 16:42:14.587625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.115 [2024-11-16 16:42:14.587660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.115 [2024-11-16 16:42:14.587687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.115 [2024-11-16 16:42:14.591536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.115 [2024-11-16 16:42:14.591570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.115 [2024-11-16 16:42:14.591596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.115 [2024-11-16 16:42:14.594374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.115 [2024-11-16 16:42:14.594410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.115 [2024-11-16 16:42:14.594437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.115 [2024-11-16 16:42:14.598893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.115 [2024-11-16 16:42:14.598948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.115 [2024-11-16 16:42:14.598978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.603118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.603155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.603183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.606548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.606585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.606612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.610501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.610538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.610566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.614185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.614220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.614247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.617932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.617967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.617995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.621637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.621689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 
[2024-11-16 16:42:14.621717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.625267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.625319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.625332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.628549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.628584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.628611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.632634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.632670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.632698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.635943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.635977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.636005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.639547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.639582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.639609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.642570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.642605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.642634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.646926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.646961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24800 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.646988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.650596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.650630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.375 [2024-11-16 16:42:14.650658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.375 [2024-11-16 16:42:14.654078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.375 [2024-11-16 16:42:14.654111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.654139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.657539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.657588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.657631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.661577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.661641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.661668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.665528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.665580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.665622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.669366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.669417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.669429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.672687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.672722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.672749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.676478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.676513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.676540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.679958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.680009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.680052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.684192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.684243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.684272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.688643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.688680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.688693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.692746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.692796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.692823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.697363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.697402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.697416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.701429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.701510] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.701538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.704103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.704151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.704164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.708185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.708236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.708249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.711904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.711954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.711982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.715553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.715603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.715631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.719439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.719477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.719491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.723320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.723369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.723397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.727183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.727236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.727265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.730922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.730972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.731000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.734744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.734784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.734797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.738768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.738806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.738834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.742803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.742854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.742883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.746555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.746605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.746632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.750759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.750810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.750823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.754558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 
[2024-11-16 16:42:14.754608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.376 [2024-11-16 16:42:14.754636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.376 [2024-11-16 16:42:14.757939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.376 [2024-11-16 16:42:14.757989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.758018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.761842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.761890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.761918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.765703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.765752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.765781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.769484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.769553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.769581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.772883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.772932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.772960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.775986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.776035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.776063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.780047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.780124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.780136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.783746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.783796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.783824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.787551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.787601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.787629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.791403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.791452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.791481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.795740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.795789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.795817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.799662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.799711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.799739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.803953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.804003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.804031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.807788] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.807850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.807862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.811323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.811373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.811401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.815045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.815107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.815136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.819000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.819051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.819089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.822719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.822753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.822765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.826557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.826608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.826637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.830596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.830646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.830674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:37.377 [2024-11-16 16:42:14.834229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.834279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.834307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.837880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.837930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.837959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.841889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.841940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.841968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.846007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.846084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.846114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.849343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.849382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.849409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.852747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.852797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.852825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.856707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.856740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.377 [2024-11-16 16:42:14.856768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.377 [2024-11-16 16:42:14.860161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.377 [2024-11-16 16:42:14.860213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.378 [2024-11-16 16:42:14.860241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.864179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.864247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.864260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.867938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.867974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.868002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.872183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.872219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.872247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.876538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.876572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.876600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.879735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.879770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.879797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.883238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.883272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.883300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.887381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.887413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.887441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.891046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.891089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.891117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.894602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.894635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.894663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.898423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.898456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.898484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.902256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.902290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.902318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.906041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.906085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.906113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.909818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.909851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.909879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.913281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.913317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.913345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.917064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.917095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.917122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.920329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.920363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.920390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.924349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.924398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.638 [2024-11-16 16:42:14.924411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.638 [2024-11-16 16:42:14.928697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.638 [2024-11-16 16:42:14.928731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.928758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.932037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.932082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.932110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.936021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.936082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 
[2024-11-16 16:42:14.936096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.939706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.939740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.939767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.943320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.943354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.943380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.946920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.946954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.946982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.950647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.950680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.950708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.954317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.954351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.954378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.958086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.958130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.958158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.961722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.961757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.961784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.965707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.965741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.965769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.969493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.969542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.969570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.973291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.973340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.973368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.976749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.976782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.976810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.980211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.980244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.980272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.983559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.983593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.983620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.987395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.987428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.987455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.990821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.990854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.990882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.994213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.994262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.994274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:14.997778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:14.997811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:14.997838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:15.001118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:15.001151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:15.001185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:15.005097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:15.005130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:15.005158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:15.008286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:15.008320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:15.008346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:15.011756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:15.011791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:15.011819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:15.015129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:15.015162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:15.015189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:15.019091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:15.019122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:15.019149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:15.022834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:15.022867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.639 [2024-11-16 16:42:15.022894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.639 [2024-11-16 16:42:15.026621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.639 [2024-11-16 16:42:15.026654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.026681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.029991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.030025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.030052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.033471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.033535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.033562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.037788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 
[2024-11-16 16:42:15.037822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.037850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.041609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.041641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.041668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.045092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.045125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.045153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.048869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.048902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.048929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.052767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.052800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.052828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.056620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.056654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.056681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.060187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.060221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.060248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.063654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.063687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.063713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.067982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.068016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.068044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.071545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.071579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.071606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.075369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.075418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.075430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.079203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.079236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.079264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.082751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.082786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.082813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.086203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.086237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.086265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.089580] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.089613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.089639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.093564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.093630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.093657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.097140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.097174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.097225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.100521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.100555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.100583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.104005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.104041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.104081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.107486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.107521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.107549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.110855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.110888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.110916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:37.640 [2024-11-16 16:42:15.114784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.114819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.114847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.118240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.118273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.118301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.640 [2024-11-16 16:42:15.122422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.640 [2024-11-16 16:42:15.122475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.640 [2024-11-16 16:42:15.122505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.126381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.126436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.126464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.130765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.130816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.130844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.135008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.135045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.135083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.138504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.138538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.138565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.142168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.142203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.142230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.145356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.145410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.145423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.149110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.149144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.149172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.152702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.152737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.152765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.156225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.156259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.156286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.159773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.159809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.159838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.163704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.163739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.163766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.167880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.167915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.167943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.170771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.170805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.170832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.174053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.174110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.174122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.177703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.177750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.177778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.181864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.181897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.181925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.185665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.185698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.185724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.189164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.189245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.189258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.193652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.193683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.193711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.197718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.197753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.901 [2024-11-16 16:42:15.197781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.901 [2024-11-16 16:42:15.201577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.901 [2024-11-16 16:42:15.201626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.201653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.205714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.205749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.205776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.209156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.209216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.209231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.212897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.212931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.212958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.216447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.216481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 
[2024-11-16 16:42:15.216508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.220562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.220597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.220625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.224404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.224437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.224464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.227695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.227729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.227757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.230723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.230757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.230785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.234570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.234603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.234630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.238295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.238344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.238355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.242031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.242075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.242103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.245398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.245451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.245480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.248875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.248924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.248951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.252766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.252801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.252829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.256196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.256231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.256258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.259710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.259744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.259771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.263364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.263400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.263427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.266875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.266909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.266936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.902 [2024-11-16 16:42:15.270735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:37.902 [2024-11-16 16:42:15.270768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.902 [2024-11-16 16:42:15.270795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[output condensed: the same three-message sequence — nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10), then nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ (sqid:1, nsid:1, len:32, varying cid and lba), then nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 p:0 m:0 dnr:0 — repeats for every outstanding READ from [2024-11-16 16:42:15.274588] through [2024-11-16 16:42:15.805018] (elapsed 00:22:37.902 - 00:22:38.429)]
00:22:38.429 [2024-11-16 16:42:15.808373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.808408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.808435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.811726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.811759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.811787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.815614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.815649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.815677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.819159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.819194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.819222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.822341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.822377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.822405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.825887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.825920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.825947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.830381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.830416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.830443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.833776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.833810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.833838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.837589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.837638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.837681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.841455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.841492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.841504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.844873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.844907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.844935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.848578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.848613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.848641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.852414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.852449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.852477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.856581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.856632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.856675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.860567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.860602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.860630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.863566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.863620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.863648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.867377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.867443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.867470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.871726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.871776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.871804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.429 [2024-11-16 16:42:15.875820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.429 [2024-11-16 16:42:15.875869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.429 [2024-11-16 16:42:15.875897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.430 [2024-11-16 16:42:15.880174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.430 [2024-11-16 16:42:15.880226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.430 [2024-11-16 16:42:15.880239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.430 [2024-11-16 16:42:15.884716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.430 [2024-11-16 16:42:15.884766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.430 [2024-11-16 16:42:15.884795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.430 [2024-11-16 16:42:15.888540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 
00:22:38.430 [2024-11-16 16:42:15.888589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.430 [2024-11-16 16:42:15.888616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.430 [2024-11-16 16:42:15.893334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.430 [2024-11-16 16:42:15.893386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.430 [2024-11-16 16:42:15.893399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.430 [2024-11-16 16:42:15.897330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.430 [2024-11-16 16:42:15.897367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.430 [2024-11-16 16:42:15.897380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.430 [2024-11-16 16:42:15.901040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.430 [2024-11-16 16:42:15.901101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.430 [2024-11-16 16:42:15.901129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.430 [2024-11-16 16:42:15.904620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.430 [2024-11-16 16:42:15.904668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.430 [2024-11-16 16:42:15.904697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.430 [2024-11-16 16:42:15.908583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.430 [2024-11-16 16:42:15.908632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.430 [2024-11-16 16:42:15.908660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.430 [2024-11-16 16:42:15.912968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.430 [2024-11-16 16:42:15.913052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.430 [2024-11-16 16:42:15.913095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.917133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.917211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.917225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.920778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.920829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.920857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.924888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.924930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.924958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.928324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.928373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.928402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.932426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.932475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.932503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.936200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.936250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.936279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.940106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.940166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.940194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.943228] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.943278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.943306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.946756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.946808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.946837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.950762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.950812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.950840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.954442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.954492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.954519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.958162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.958213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.958226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.962077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.962135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.962163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.965575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.965624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.965668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:38.690 [2024-11-16 16:42:15.970207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.970256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.970285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.974272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.974320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.974348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.978422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.978471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.690 [2024-11-16 16:42:15.978499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.690 [2024-11-16 16:42:15.982774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.690 [2024-11-16 16:42:15.982825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:15.982853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:15.986848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:15.986896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:15.986924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:15.991099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:15.991147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:15.991175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:15.994754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:15.994804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:15.994832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:15.998942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:15.998990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:15.999018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.002859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.002909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.002937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.006395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.006444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.006472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.010271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.010308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.010321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.014202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.014251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.014278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.017764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.017812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.017840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.021870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.021918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.021947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.026058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.026117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.026145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.029581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.029631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.029660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.033282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.033332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.033345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.037243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.037292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.037304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.040858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.040891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.040920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.044406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.044441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.044469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.047770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.047804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.047832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.051723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.051758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.051786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.055174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.055207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.055235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.059009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.059044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.059082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.062752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.062785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.062813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.066887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.066920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.066948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.070210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.070243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.070272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.073631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.073664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:38.691 [2024-11-16 16:42:16.073692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.077577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.077610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.077638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.080647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.080680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.691 [2024-11-16 16:42:16.080708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.691 [2024-11-16 16:42:16.084743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.691 [2024-11-16 16:42:16.084778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.084806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.088140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.088174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.088201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.091151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.091185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.091213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.094948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.094984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.095011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.098045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.098090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.098118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.102121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.102155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.102184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.106164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.106198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.106226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.109532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.109582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.109626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.113584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.113618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.113645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.117250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.117300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.117328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.120982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.121014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.121042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.124440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.124473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.124500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.128096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.128129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.128157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.132005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.132038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.132066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.135867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.135900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.135927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.138388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.138420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.138447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.142678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.142713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.142741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.145976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.146009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.146036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.149424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.149460] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.149487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.153428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.153480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.153492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.157569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.157633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.157660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.160792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.160828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.160855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.164617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.164652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.164679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.168071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.168103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.168131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.171515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.692 [2024-11-16 16:42:16.171547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.171575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.692 [2024-11-16 16:42:16.176332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 
00:22:38.692 [2024-11-16 16:42:16.176386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.692 [2024-11-16 16:42:16.176415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.180417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.180454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.180484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.184860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.184929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.184958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.189112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.189162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.189174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.193153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.193208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.193237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.196643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.196676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.196703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.200787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.200823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.200851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.204320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.204355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.204381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.207513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.207547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.207575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.211314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.211348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.211375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.214740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.214773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.214801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.218548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.218581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.218608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.221963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.221998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.222025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.225752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.225815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.225842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.229683] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.229717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.229744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.232824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.232858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.232886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.236713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.236749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.236776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.240438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.240473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.240500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.243883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.243916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.952 [2024-11-16 16:42:16.243943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.952 [2024-11-16 16:42:16.247390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.952 [2024-11-16 16:42:16.247426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.247455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.250898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.250933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.250960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
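Each failure above is the same three-line pattern: nvme_tcp.c:1391 flags a CRC32C data digest mismatch on a received data PDU, nvme_qpair.c prints the affected READ, and the command completes with status (00/22), the transient transport error this digest test is designed to provoke. The harness tallies those completions through the iostat path traced further down; a minimal sketch of that check, reusing only the rpc.py invocation and jq filter that appear in this log (the rpc shell function is shorthand added here, not part of the suite):

    # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    # Works because the controller was attached after bdev_nvme_set_options
    # --nvme-error-stat, which makes bdevperf keep per-status-code counters.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    errcount=$(rpc bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # this randread leg counted 534 of them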
00:22:38.953 [2024-11-16 16:42:16.254801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.254836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.254864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.258444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.258479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.258506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.261889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.261924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.261951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.265493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.265558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.265587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.269045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.269090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.269118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.272678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.272712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.272739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.276640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.276673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.276700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.280330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.280364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.280391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.284175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.284209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.284237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.287745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.287778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.287805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.290817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.290850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.290877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.294309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.294343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.294370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.297527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.297590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.297634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.953 [2024-11-16 16:42:16.301535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10) 00:22:38.953 [2024-11-16 16:42:16.301585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.953 [2024-11-16 16:42:16.301613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:38.953 [2024-11-16 16:42:16.305559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10)
00:22:38.953 [2024-11-16 16:42:16.305621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.953 [2024-11-16 16:42:16.305648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:38.953 [2024-11-16 16:42:16.309018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe77d10)
00:22:38.953 [2024-11-16 16:42:16.309051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:38.953 [2024-11-16 16:42:16.309090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:38.953
00:22:38.953 Latency(us)
00:22:38.953 [2024-11-16T16:42:16.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.953 [2024-11-16T16:42:16.444Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:38.953 nvme0n1 : 2.00 8272.88 1034.11 0.00 0.00 1930.89 506.41 5034.36
00:22:38.953 [2024-11-16T16:42:16.444Z] ===================================================================================================================
00:22:38.953 [2024-11-16T16:42:16.444Z] Total : 8272.88 1034.11 0.00 0.00 1930.89 506.41 5034.36
00:22:38.953 0
00:22:38.953 16:42:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:38.953 16:42:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:38.953 16:42:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:38.953 16:42:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:38.953 | .driver_specific
00:22:38.953 | .nvme_error
00:22:38.953 | .status_code
00:22:38.953 | .command_transient_transport_error'
00:22:39.212 16:42:16 -- host/digest.sh@71 -- # (( 534 > 0 ))
00:22:39.212 16:42:16 -- host/digest.sh@73 -- # killprocess 98031
00:22:39.212 16:42:16 -- common/autotest_common.sh@936 -- # '[' -z 98031 ']'
00:22:39.212 16:42:16 -- common/autotest_common.sh@940 -- # kill -0 98031
00:22:39.212 16:42:16 -- common/autotest_common.sh@941 -- # uname
00:22:39.212 16:42:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:39.212 16:42:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98031
00:22:39.212 killing process with pid 98031 Received shutdown signal, test time was about 2.000000 seconds
00:22:39.212
00:22:39.212 Latency(us)
00:22:39.212 [2024-11-16T16:42:16.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:39.212 [2024-11-16T16:42:16.703Z] ===================================================================================================================
00:22:39.212 [2024-11-16T16:42:16.703Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:39.212 16:42:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:39.212 16:42:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:39.212 16:42:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98031'
00:22:39.212 16:42:16 -- common/autotest_common.sh@955 -- # kill 98031
00:22:39.212 16:42:16 -- common/autotest_common.sh@960 -- # wait 98031
00:22:39.471 16:42:16 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:22:39.471 16:42:16 -- host/digest.sh@54 -- # local rw bs qd
00:22:39.471 16:42:16 -- host/digest.sh@56 -- # rw=randwrite
00:22:39.471 16:42:16 -- host/digest.sh@56 -- # bs=4096
00:22:39.471 16:42:16 -- host/digest.sh@56 -- # qd=128
00:22:39.471 16:42:16 -- host/digest.sh@58 -- # bperfpid=98116
00:22:39.471 16:42:16 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:39.471 16:42:16 -- host/digest.sh@60 -- # waitforlisten 98116 /var/tmp/bperf.sock
00:22:39.471 16:42:16 -- common/autotest_common.sh@829 -- # '[' -z 98116 ']'
00:22:39.471 16:42:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:39.471 16:42:16 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:39.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:39.471 16:42:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:39.471 16:42:16 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:39.471 16:42:16 -- common/autotest_common.sh@10 -- # set +x
00:22:39.471 [2024-11-16 16:42:16.813446] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:39.471 [2024-11-16 16:42:16.813554] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98116 ]
00:22:39.729 [2024-11-16 16:42:16.945739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:40.295 [2024-11-16 16:42:17.007449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:40.295 16:42:17 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:40.295 16:42:17 -- common/autotest_common.sh@862 -- # return 0
00:22:40.295 16:42:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:40.295 16:42:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:40.553 16:42:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:40.553 16:42:17 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:40.553 16:42:17 -- common/autotest_common.sh@10 -- # set +x
00:22:40.553 16:42:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:40.553 16:42:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:40.553 16:42:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:40.811 nvme0n1
00:22:40.811 16:42:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:40.811 16:42:18 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:40.811 16:42:18 -- common/autotest_common.sh@10 -- # set +x
00:22:40.811 16:42:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:40.811 16:42:18 -- host/digest.sh@69
-- # bperf_py perform_tests 00:22:40.811 16:42:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:40.811 Running I/O for 2 seconds... 00:22:40.811 [2024-11-16 16:42:18.278949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190eea00 00:22:40.811 [2024-11-16 16:42:18.279183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.811 [2024-11-16 16:42:18.279212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:40.811 [2024-11-16 16:42:18.289130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190eee38 00:22:40.811 [2024-11-16 16:42:18.290405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.811 [2024-11-16 16:42:18.290456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.811 [2024-11-16 16:42:18.298905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ed920 00:22:40.811 [2024-11-16 16:42:18.299864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:40.811 [2024-11-16 16:42:18.299915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:41.070 [2024-11-16 16:42:18.305936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ee5c8 00:22:41.070 [2024-11-16 16:42:18.306168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.070 [2024-11-16 16:42:18.306199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:41.070 [2024-11-16 16:42:18.315880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e5a90 00:22:41.070 [2024-11-16 16:42:18.316417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.070 [2024-11-16 16:42:18.316453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:41.070 [2024-11-16 16:42:18.326068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190edd58 00:22:41.070 [2024-11-16 16:42:18.327072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.070 [2024-11-16 16:42:18.327113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.333353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190fcdd0 00:22:41.071 [2024-11-16 16:42:18.334264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:18023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.334311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.343518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190eee38 00:22:41.071 [2024-11-16 16:42:18.344252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.344286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.351586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f2948 00:22:41.071 [2024-11-16 16:42:18.352622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.352670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.359984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e9168 00:22:41.071 [2024-11-16 16:42:18.360332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.360366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.371434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190de470 00:22:41.071 [2024-11-16 16:42:18.372437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.372483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.378109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e0ea0 00:22:41.071 [2024-11-16 16:42:18.378393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.378420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.389279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f6cc8 00:22:41.071 [2024-11-16 16:42:18.390095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.390138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.397729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190df118 00:22:41.071 [2024-11-16 16:42:18.399230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:68 nsid:1 lba:2313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.399278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.407266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f6020 00:22:41.071 [2024-11-16 16:42:18.407825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.407858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.415155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f8a50 00:22:41.071 [2024-11-16 16:42:18.416272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.416319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.424651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f7da8 00:22:41.071 [2024-11-16 16:42:18.425803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.425838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.433628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190de470 00:22:41.071 [2024-11-16 16:42:18.434315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.434377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.442298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e2c28 00:22:41.071 [2024-11-16 16:42:18.443634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.443671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.450941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f35f0 00:22:41.071 [2024-11-16 16:42:18.451878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.451925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.460129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f46d0 00:22:41.071 [2024-11-16 16:42:18.460426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.460454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.468966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190fef90 00:22:41.071 [2024-11-16 16:42:18.469467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.469504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.477740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190fbcf0 00:22:41.071 [2024-11-16 16:42:18.478194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.478227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.486782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190fbcf0 00:22:41.071 [2024-11-16 16:42:18.487404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.487453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.495545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f4298 00:22:41.071 [2024-11-16 16:42:18.495980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.496014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.504335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e5ec8 00:22:41.071 [2024-11-16 16:42:18.504754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.504787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.513038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e6738 00:22:41.071 [2024-11-16 16:42:18.514476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.514508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.522109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ec840 00:22:41.071 [2024-11-16 
16:42:18.522397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.522425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.531086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190dfdc0 00:22:41.071 [2024-11-16 16:42:18.531547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.531580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.540097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ec840 00:22:41.071 [2024-11-16 16:42:18.541032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.541074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:41.071 [2024-11-16 16:42:18.548518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e23b8 00:22:41.071 [2024-11-16 16:42:18.549423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.071 [2024-11-16 16:42:18.549472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:41.331 [2024-11-16 16:42:18.559359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190eee38 00:22:41.331 [2024-11-16 16:42:18.559993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.331 [2024-11-16 16:42:18.560048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:41.331 [2024-11-16 16:42:18.567135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e4140 00:22:41.331 [2024-11-16 16:42:18.567899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.331 [2024-11-16 16:42:18.567935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:41.331 [2024-11-16 16:42:18.576045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190df550 00:22:41.331 [2024-11-16 16:42:18.577671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.331 [2024-11-16 16:42:18.577705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:41.331 [2024-11-16 16:42:18.584817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e5658 
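Note the error source has moved relative to the randread leg: on writes the corrupted digest leaves the initiator on the wire, so it is tcp.c:2036:data_crc32_calc_done on the target side that detects the mismatch when it verifies the data digest of each incoming write PDU, and the host then sees the WRITE complete with the same (00/22) status. The session producing these entries was assembled by the trace at the start of this leg; condensed into a sketch (binary paths, flags, and the target address are verbatim from the trace above, while the socket-wait loop is a simplified stand-in for waitforlisten):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bperf.sock
    # Start bdevperf idle (-z: wait for an RPC before running I/O) on core mask 0x2.
    "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
    until [ -S "$sock" ]; do sleep 0.1; done
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t disable
    # --ddgst enables the NVMe/TCP data digest on this controller's connections.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm crc32c corruption (arguments as traced), then kick off the workload.
    "$spdk/scripts/rpc.py" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests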
00:22:41.331 [2024-11-16 16:42:18.585800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.331 [2024-11-16 16:42:18.585834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:41.331 [2024-11-16 16:42:18.594975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ebfd0 00:22:41.331 [2024-11-16 16:42:18.595737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.331 [2024-11-16 16:42:18.595772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:41.331 [2024-11-16 16:42:18.603027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190eff18 00:22:41.331 [2024-11-16 16:42:18.604423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.331 [2024-11-16 16:42:18.604473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:41.331 [2024-11-16 16:42:18.611699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e6b70 00:22:41.331 [2024-11-16 16:42:18.612783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.331 [2024-11-16 16:42:18.612832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:41.331 [2024-11-16 16:42:18.620834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ec408 00:22:41.331 [2024-11-16 16:42:18.621896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.331 [2024-11-16 16:42:18.621932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:41.331 [2024-11-16 16:42:18.629369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f7970 00:22:41.331 [2024-11-16 16:42:18.630577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.331 [2024-11-16 16:42:18.630609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.638631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f0bc0 00:22:41.332 [2024-11-16 16:42:18.639160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.639196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.647526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) 
with pdu=0x2000190e4de8 00:22:41.332 [2024-11-16 16:42:18.648897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.648929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.656504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ee5c8 00:22:41.332 [2024-11-16 16:42:18.656911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.656942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.665401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f1868 00:22:41.332 [2024-11-16 16:42:18.665988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.666021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.674517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f6458 00:22:41.332 [2024-11-16 16:42:18.675081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.675124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.683187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e5a90 00:22:41.332 [2024-11-16 16:42:18.684409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.684441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.691960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f57b0 00:22:41.332 [2024-11-16 16:42:18.693420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.693455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.701896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f6890 00:22:41.332 [2024-11-16 16:42:18.702760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.702793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.708498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x203e0e0) with pdu=0x2000190f7100 00:22:41.332 [2024-11-16 16:42:18.708628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.708648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.719468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ee190 00:22:41.332 [2024-11-16 16:42:18.720138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.720170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.727802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e12d8 00:22:41.332 [2024-11-16 16:42:18.729052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.729107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.736957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190edd58 00:22:41.332 [2024-11-16 16:42:18.737425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.737458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.745547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ee190 00:22:41.332 [2024-11-16 16:42:18.746402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.746435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.754151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ee190 00:22:41.332 [2024-11-16 16:42:18.755092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.755134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.764112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e8d30 00:22:41.332 [2024-11-16 16:42:18.765478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.765512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.772869] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e4de8 00:22:41.332 [2024-11-16 16:42:18.773761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.773792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.781463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e6738 00:22:41.332 [2024-11-16 16:42:18.782838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.782871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.790036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190eea00 00:22:41.332 [2024-11-16 16:42:18.791254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.791285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.799197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ebb98 00:22:41.332 [2024-11-16 16:42:18.799597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.799631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.808036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e8088 00:22:41.332 [2024-11-16 16:42:18.808613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.808647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:41.332 [2024-11-16 16:42:18.815712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f8618 00:22:41.332 [2024-11-16 16:42:18.815896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.332 [2024-11-16 16:42:18.815918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:41.592 [2024-11-16 16:42:18.827237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e0630 00:22:41.592 [2024-11-16 16:42:18.827922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.592 [2024-11-16 16:42:18.827958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:41.592 
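When this leg finishes, the harness will tear the bperf session down the same way the randread leg did above (killprocess 98031). Read back from that trace, the helper amounts to the sketch below (simplified: the traced original also branches on the OS and on whether the process runs under sudo, which is what the uname and reactor_1 = sudo comparisons are doing):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1       # refuse pids that are already gone
        ps --no-headers -o comm= "$pid"  # an SPDK app reports reactor_<core> here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap it and propagate its exit status
    }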
[2024-11-16 16:42:18.835111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ebb98
00:22:41.592 [2024-11-16 16:42:18.836379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:41.592 [2024-11-16 16:42:18.836414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:22:41.592 [2024-11-16 16:42:18.843706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ea248
00:22:41.592 [2024-11-16 16:42:18.844665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:41.592 [2024-11-16 16:42:18.844698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
[... the same three-line sequence (a data_crc32_calc_done "Data digest error" on tqpair=(0x203e0e0), the offending WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for many further cids between 16:42:18.852 and 16:42:20.063 ...]
00:22:42.634 [2024-11-16 16:42:20.070544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f96f8
[2024-11-16 16:42:20.071762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:42.634 [2024-11-16 16:42:20.071801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.634 [2024-11-16 16:42:20.080981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f6020 00:22:42.634 [2024-11-16 16:42:20.081987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.634 [2024-11-16 16:42:20.082018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.634 [2024-11-16 16:42:20.089644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ed0b0 00:22:42.634 [2024-11-16 16:42:20.090827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.634 [2024-11-16 16:42:20.090859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:42.634 [2024-11-16 16:42:20.098660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190dfdc0 00:22:42.634 [2024-11-16 16:42:20.099271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.634 [2024-11-16 16:42:20.099305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:42.634 [2024-11-16 16:42:20.107603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190fa3a0 00:22:42.634 [2024-11-16 16:42:20.108854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.634 [2024-11-16 16:42:20.108885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.634 [2024-11-16 16:42:20.116458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e38d0 00:22:42.634 [2024-11-16 16:42:20.116890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.634 [2024-11-16 16:42:20.116925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.125164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f0350 00:22:42.893 [2024-11-16 16:42:20.125602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.125643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.134040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e88f8 00:22:42.893 [2024-11-16 16:42:20.134645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.134740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.141801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f20d8 00:22:42.893 [2024-11-16 16:42:20.141980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.142000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.152688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f9b30 00:22:42.893 [2024-11-16 16:42:20.153279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.153344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.160255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e1b48 00:22:42.893 [2024-11-16 16:42:20.161412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.161461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.169265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e6fa8 00:22:42.893 [2024-11-16 16:42:20.169600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.169630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.178972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e7818 00:22:42.893 [2024-11-16 16:42:20.179431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.179466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.187847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190df550 00:22:42.893 [2024-11-16 16:42:20.188475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.188553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.196575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f7538 00:22:42.893 [2024-11-16 16:42:20.197175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.197215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.205313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190fda78 00:22:42.893 [2024-11-16 16:42:20.205867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:20944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.205901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.214047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e6fa8 00:22:42.893 [2024-11-16 16:42:20.214587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.214636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.222798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190f1ca0 00:22:42.893 [2024-11-16 16:42:20.223315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.893 [2024-11-16 16:42:20.223350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:42.893 [2024-11-16 16:42:20.231531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190ef270 00:22:42.894 [2024-11-16 16:42:20.232065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.894 [2024-11-16 16:42:20.232123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:42.894 [2024-11-16 16:42:20.241589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e1710 00:22:42.894 [2024-11-16 16:42:20.242606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.894 [2024-11-16 16:42:20.242652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:42.894 [2024-11-16 16:42:20.248609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190e12d8 00:22:42.894 [2024-11-16 16:42:20.250015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.894 [2024-11-16 16:42:20.250046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:42.894 [2024-11-16 16:42:20.257673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e0e0) with pdu=0x2000190dfdc0 00:22:42.894 [2024-11-16 16:42:20.258464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.894 [2024-11-16 16:42:20.258526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:22:42.894
00:22:42.894 Latency(us)
00:22:42.894 [2024-11-16T16:42:20.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:42.894 [2024-11-16T16:42:20.385Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:22:42.894 nvme0n1 : 2.00 28397.49 110.93 0.00 0.00 4503.14 1876.71 12332.68
00:22:42.894 [2024-11-16T16:42:20.385Z] ===================================================================================================================
00:22:42.894 [2024-11-16T16:42:20.385Z] Total : 28397.49 110.93 0.00 0.00 4503.14 1876.71 12332.68
00:22:42.894 0
00:22:42.894 16:42:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:42.894 16:42:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:42.894 16:42:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:42.894 | .driver_specific
00:22:42.894 | .nvme_error
00:22:42.894 | .status_code
00:22:42.894 | .command_transient_transport_error'
00:22:42.894 16:42:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:43.151 16:42:20 -- host/digest.sh@71 -- # (( 222 > 0 ))
00:22:43.151 16:42:20 -- host/digest.sh@73 -- # killprocess 98116
00:22:43.151 16:42:20 -- common/autotest_common.sh@936 -- # '[' -z 98116 ']'
00:22:43.151 16:42:20 -- common/autotest_common.sh@940 -- # kill -0 98116
00:22:43.151 16:42:20 -- common/autotest_common.sh@941 -- # uname
00:22:43.151 16:42:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:43.151 16:42:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98116
00:22:43.151 killing process with pid 98116
Received shutdown signal, test time was about 2.000000 seconds
00:22:43.151
00:22:43.151 Latency(us)
00:22:43.151 [2024-11-16T16:42:20.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:43.151 [2024-11-16T16:42:20.642Z] ===================================================================================================================
00:22:43.151 [2024-11-16T16:42:20.642Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:43.151 16:42:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:43.151 16:42:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:43.151 16:42:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98116'
00:22:43.151 16:42:20 -- common/autotest_common.sh@955 -- # kill 98116
00:22:43.151 16:42:20 -- common/autotest_common.sh@960 -- # wait 98116
00:22:43.407 16:42:20 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:22:43.407 16:42:20 -- host/digest.sh@54 -- # local rw bs qd
00:22:43.407 16:42:20 -- host/digest.sh@56 -- # rw=randwrite
00:22:43.407 16:42:20 -- host/digest.sh@56 -- # bs=131072
00:22:43.407 16:42:20 -- host/digest.sh@56 -- # qd=16
00:22:43.407 16:42:20 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:22:43.407 16:42:20 -- host/digest.sh@58 -- # bperfpid=98205
00:22:43.407 16:42:20 -- host/digest.sh@60 -- # waitforlisten 98205 /var/tmp/bperf.sock
00:22:43.407 16:42:20 -- common/autotest_common.sh@829 -- # '[' -z 98205 ']'
00:22:43.407 16:42:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:43.407 16:42:20 -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:43.407 16:42:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:43.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:43.407 16:42:20 -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:43.407 16:42:20 -- common/autotest_common.sh@10 -- # set +x
00:22:43.407 [2024-11-16 16:42:20.826005] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:22:43.407 [2024-11-16 16:42:20.826126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98205 ]
00:22:43.407 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:43.407 Zero copy mechanism will not be used.
00:22:43.664 [2024-11-16 16:42:20.958095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:43.664 [2024-11-16 16:42:21.016511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:44.598 16:42:21 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:44.598 16:42:21 -- common/autotest_common.sh@862 -- # return 0
00:22:44.598 16:42:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:44.598 16:42:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:44.856 16:42:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:44.856 16:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:44.856 16:42:22 -- common/autotest_common.sh@10 -- # set +x
00:22:44.856 16:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:44.856 16:42:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:44.856 16:42:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:45.114 nvme0n1
00:22:45.114 16:42:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:45.114 16:42:22 -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:45.114 16:42:22 -- common/autotest_common.sh@10 -- # set +x
00:22:45.114 16:42:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:45.115 16:42:22 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:45.115 16:42:22 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:45.115 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:45.115 Zero copy mechanism will not be used.
00:22:45.115 Running I/O for 2 seconds...
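The digest.sh trace above doubles as a recipe: start bdevperf against /var/tmp/bperf.sock, enable per-command NVMe error counters, attach the controller with data digest turned on, tell the accel layer to corrupt CRC32C results, run I/O, and read the error counter back. A minimal hand-run sketch of that sequence follows; it assumes the same workspace paths as this job, a bdevperf instance already started with -r /var/tmp/bperf.sock -z as shown, and that the accel injection RPC goes to whatever application rpc_cmd targets in this test (the trace does not show that socket explicitly).

#!/usr/bin/env bash
# Sketch of the digest error-injection flow traced above; not the test script
# itself, just the RPC sequence it drives.
SPDK=/home/vagrant/spdk_repo/spdk
BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
# so digest errors are counted instead of failing the bdev.
$BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the controller with TCP data digest (DDGST) enabled; prints nvme0n1.
$BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd CRC32C operation in the accel framework (default RPC
# socket assumed here; adjust -s to match the application being injected).
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the configured workload, then read back the transient-error counter.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$BPERF bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected corruption then shows up twice in this log: as a data_crc32_calc_done digest error in the transport, and as a completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22) that increments command_transient_transport_error, the counter the (( 222 > 0 )) assertion above checked for the first run.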
00:22:45.115 [2024-11-16 16:42:22.581512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.115 [2024-11-16 16:42:22.581947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.115 [2024-11-16 16:42:22.581988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.115 [2024-11-16 16:42:22.585606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.115 [2024-11-16 16:42:22.585773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.115 [2024-11-16 16:42:22.585796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.115 [2024-11-16 16:42:22.589488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.115 [2024-11-16 16:42:22.589629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.115 [2024-11-16 16:42:22.589652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.115 [2024-11-16 16:42:22.593702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.115 [2024-11-16 16:42:22.593807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.115 [2024-11-16 16:42:22.593829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.115 [2024-11-16 16:42:22.597652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.115 [2024-11-16 16:42:22.597762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.115 [2024-11-16 16:42:22.597785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.115 [2024-11-16 16:42:22.601623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.115 [2024-11-16 16:42:22.601734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.115 [2024-11-16 16:42:22.601760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.605677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.605853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.605877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.609544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.609668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.609691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.613575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.613811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.613832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.617543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.617755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.617777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.621518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.621692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.621713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.625447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.625577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.625602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.629317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.629412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.629434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.633204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.633388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.633410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.637122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.637438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.637477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.640958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.641169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.641232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.645007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.645222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.645244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.648904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.648998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.649021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.652871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.653042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.653063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.656783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.656951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.656972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.660663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.660755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.660777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.664613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.664776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.664797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.375 [2024-11-16 16:42:22.668559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.375 [2024-11-16 16:42:22.668801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.375 [2024-11-16 16:42:22.668839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.672536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.672755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.672776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.676495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.676665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.676686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.680438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.680543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.680565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.684370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.684515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.684536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.688225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.688430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 
[2024-11-16 16:42:22.688451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.692253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.692360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.692382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.696345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.696521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.696542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.700278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.700500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.700521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.704287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.704400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.704452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.708472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.708681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.708702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.712536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.712659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.712680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.716561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.716655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.716676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.720608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.720709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.720731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.724585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.724739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.724760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.728672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.728842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.728864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.732725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.733080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.733126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.736637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.736742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.736764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.740641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.740727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.740749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.744665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.744787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.744808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.748645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.748766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.748787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.752896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.753066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.753101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.756940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.757164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.757211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.760881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.761095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.761116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.764849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.765002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.765022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.768802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.768910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.768931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.772767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.772908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.772929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.376 [2024-11-16 16:42:22.776716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.376 [2024-11-16 16:42:22.776843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.376 [2024-11-16 16:42:22.776864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.780673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.780798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.780819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.784607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.784775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.784796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.788538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.788820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.788874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.792424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.792551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.792573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.796402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.796587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.796616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.800278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 
[2024-11-16 16:42:22.800373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.800394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.804267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.804416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.804436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.808166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.808287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.808307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.812026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.812136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.812159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.815954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.816132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.816152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.819868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.820034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.820055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.823724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.823857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.823879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.827770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.827962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.827983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.831711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.831832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.831854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.835613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.835762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.835782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.839533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.839650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.839672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.843474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.843592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.843613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.847354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.847534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.847555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.851410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.851652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.851673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.855145] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.855249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.855271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.377 [2024-11-16 16:42:22.859159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.377 [2024-11-16 16:42:22.859482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.377 [2024-11-16 16:42:22.859513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.637 [2024-11-16 16:42:22.863140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.637 [2024-11-16 16:42:22.863291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.637 [2024-11-16 16:42:22.863315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.637 [2024-11-16 16:42:22.867001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.637 [2024-11-16 16:42:22.867180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.637 [2024-11-16 16:42:22.867202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.637 [2024-11-16 16:42:22.870987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.637 [2024-11-16 16:42:22.871119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.637 [2024-11-16 16:42:22.871142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.637 [2024-11-16 16:42:22.874908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.637 [2024-11-16 16:42:22.875015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.637 [2024-11-16 16:42:22.875036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.637 [2024-11-16 16:42:22.878837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.637 [2024-11-16 16:42:22.879009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.637 [2024-11-16 16:42:22.879030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
[... the same three-message sequence (data_crc32_calc_done digest error, WRITE command print, TRANSIENT TRANSPORT ERROR completion) repeats for the remaining in-flight WRITEs on tqpair 0x203e420, roughly 130 commands from 16:42:22.839 through 16:42:23.364: lba varies per command, len:32 and nsid:1 throughout, cid alternates between 15 and 0, sqhd cycles 0001/0021/0041/0061, and every completion is COMMAND TRANSIENT TRANSPORT ERROR (00/22) with p:0 m:0 dnr:0 ...]
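For reading the completion prints themselves: qid, cid, cdw0, and sqhd come straight from the completion queue entry, while p, m, dnr, and the "(00/22)" pair are unpacked from the entry's 16-bit status halfword. A small sketch of that unpacking, using the layout SPDK's completion struct exposes (phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11, More in bit 14, Do Not Retry in bit 15); the packed value below is constructed by hand for illustration.

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      /* Build the status word for "TRANSIENT TRANSPORT ERROR (00/22)":
       * SCT 0h (generic command status), SC 22h, P/M/DNR all clear. */
      uint16_t status = (uint16_t)((0x22u << 1) | (0x0u << 9));

      printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n",
             (unsigned)((status >> 9) & 0x7),   /* status code type */
             (unsigned)((status >> 1) & 0xff),  /* status code      */
             (unsigned)(status & 0x1),          /* phase tag        */
             (unsigned)((status >> 14) & 0x1),  /* more             */
             (unsigned)((status >> 15) & 0x1)); /* do not retry     */
      return 0;
  }

Since dnr:0 leaves the Do Not Retry bit clear, each of these failed WRITEs is one the initiator is permitted to resubmit.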
00:22:45.902 [2024-11-16 16:42:23.368240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90
00:22:45.902 [2024-11-16 16:42:23.368411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:45.902 [2024-11-16 16:42:23.368432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:45.902 [2024-11-16 16:42:23.372270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90
00:22:45.902 [2024-11-16 16:42:23.372558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:22:45.902 [2024-11-16 16:42:23.372600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.902 [2024-11-16 16:42:23.376065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.902 [2024-11-16 16:42:23.376191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.902 [2024-11-16 16:42:23.376214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.902 [2024-11-16 16:42:23.380014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.902 [2024-11-16 16:42:23.380204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.902 [2024-11-16 16:42:23.380225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.902 [2024-11-16 16:42:23.383925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:45.902 [2024-11-16 16:42:23.384298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.902 [2024-11-16 16:42:23.384356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.161 [2024-11-16 16:42:23.387928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.161 [2024-11-16 16:42:23.388042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.161 [2024-11-16 16:42:23.388066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.161 [2024-11-16 16:42:23.391952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.161 [2024-11-16 16:42:23.392135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.161 [2024-11-16 16:42:23.392158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.395952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.396145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.396169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.399854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.399989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.400011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.403851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.403983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.404004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.407698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.407813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.407835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.411714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.411863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.411884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.415730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.415845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.415866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.419607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.419722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.419742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.423609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.423778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.423799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.427566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.427768] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.427788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.431575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.431775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.431796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.435549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.435674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.435694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.439437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.439528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.439549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.443390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.443540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.443561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.447297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.447408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.447428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.451209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.451304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.451325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.455203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.455380] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.455401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.459042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.459227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.459249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.462912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.463083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.463105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.466790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.466974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.466995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.470689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.470837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.470858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.474742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.474880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.474901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.478617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.478740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.478760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.482576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 
00:22:46.162 [2024-11-16 16:42:23.482697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.482718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.486511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.486674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.486694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.490400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.490725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.490764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.162 [2024-11-16 16:42:23.494305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.162 [2024-11-16 16:42:23.494491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.162 [2024-11-16 16:42:23.494512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.498245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.498415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.498436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.502129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.502257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.502277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.506120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.506262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.506285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.510035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.510186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.510207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.514315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.514424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.514459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.518314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.518481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.518501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.522271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.522602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.522640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.526138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.526277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.526297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.530139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.530291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.530311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.534011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.534130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.534151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.537942] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.538038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.538059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.541951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.542086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.542121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.545860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.545981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.546002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.549804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.549973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.549994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.553777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.554053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.554090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.557650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.557772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.557793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.163 [2024-11-16 16:42:23.561630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.163 [2024-11-16 16:42:23.561805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.163 [2024-11-16 16:42:23.561825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
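Each cycle above begins with the same check: data_crc32_calc_done in tcp.c recomputes the CRC32C data digest (DDGST) over the payload of the DATA PDU just received on tqpair 0x203e420, finds it does not match the digest the peer sent, and raises the *ERROR* record, after which the command is failed back to the host. As a reference for what that digest is, here is a minimal self-contained sketch (a bitwise CRC32C, not SPDK's accelerated implementation; data_digest_ok and its parameters are illustrative names, not SPDK API):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Reflected CRC32C (Castagnoli, polynomial 0x82F63B78), the digest
 * NVMe/TCP uses for its HDGST/DDGST fields. Bitwise reference version. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical shape of the receive-side check: recompute the digest
 * over the PDU payload and compare it with the DDGST trailer that came
 * off the wire. A mismatch is what the log prints as
 * "Data digest error on tqpair=...". */
static int data_digest_ok(const uint8_t *payload, size_t len, uint32_t ddgst_recv)
{
    return crc32c(payload, len) == ddgst_recv;
}

int main(void)
{
    const uint8_t check[] = "123456789";
    /* The well-known CRC-32C check value for "123456789" is 0xE3069283. */
    printf("crc32c(\"123456789\") = 0x%08X\n", crc32c(check, 9));
    printf("matches a corrupted digest: %d\n", data_digest_ok(check, 9, 0xDEADBEEFu));
    return 0;
}

Seen in bulk like this, at a steady ~4 ms cadence with every WRITE on the connection failing identically, the pattern looks like deliberate digest-error injection by the test rather than genuine wire corruption, though the log alone does not prove that.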
00:22:46.163 [2024-11-16 16:42:23.565566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90
00:22:46.163 [2024-11-16 16:42:23.565661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.163 [2024-11-16 16:42:23.565682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the cycle continues unchanged through 16:42:23.858812, all on the same tqpair=(0x203e420) with pdu=0x2000190fef90, cid alternating between 15 and 0 and lba varying, with no other event interleaved ...]
00:22:46.426 [2024-11-16 16:42:23.862614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90
00:22:46.426 [2024-11-16 16:42:23.862717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.426 [2024-11-16 16:42:23.862739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:46.426 [2024-11-16 16:42:23.866583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.866759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.866780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.870501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.870617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.870638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.874521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.874642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.874663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.878476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.878638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.878658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.882336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.882680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.882719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.886190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.886394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.886415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.890038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.890165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.890187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.893936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.894035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.894056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.897966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.898057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.898079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.901927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.902075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.902096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.905881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.906000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.906021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.426 [2024-11-16 16:42:23.909890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.426 [2024-11-16 16:42:23.910128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.426 [2024-11-16 16:42:23.910152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.913963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.914170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.914193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.917936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.918120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.918143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.921964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.922166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.922189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.925843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.925965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.925986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.929843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.929986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.930007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.933806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.933907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.933929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.937704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.937824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.937847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.941639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.941804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.941825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.945561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.945879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.945928] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.949393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.949485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.949506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.953377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.953552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.953588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.957287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.957465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.957488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.961197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.961381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.961403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.965114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.965282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.965303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.969015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.969140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.969164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.972996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.973206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.973244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.976915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.977080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.977113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.980871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.980980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.981001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.984975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.985112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.985133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.988927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.989048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.989081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.992817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.992966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.992987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:23.996687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:23.996821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:23.996842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:24.000728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:24.000833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 
16:42:24.000855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:24.004702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:24.004874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:24.004895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:24.008697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:24.008982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.687 [2024-11-16 16:42:24.009030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.687 [2024-11-16 16:42:24.012573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.687 [2024-11-16 16:42:24.012701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.012722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.016548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.016716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.016736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.020481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.020736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.020813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.024437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.024562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.024582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.028404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.028593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:46.688 [2024-11-16 16:42:24.028614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.032302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.032395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.032415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.036235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.036378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.036398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.040029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.040227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.040248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.043907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.043995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.044015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.047924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.048107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.048127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.051828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.052020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.052040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.055725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.055904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.055925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.059655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.059838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.059858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.063617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.063723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.063744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.067579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.067722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.067742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.071474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.071594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.071615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.075360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.075463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.075484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.079331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.079497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.688 [2024-11-16 16:42:24.079517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.688 [2024-11-16 16:42:24.083219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.688 [2024-11-16 16:42:24.083503] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
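[editor's note] The flood of tcp.c:2036:data_crc32_calc_done errors above is the NVMe/TCP data-digest (DDGST) check failing on every data-bearing PDU for this qpair; the failures are uniform and continuous, consistent with deliberate digest-error injection by this phase of the test. The DDGST is a CRC32C over the PDU's data payload. Below is a minimal, self-contained sketch of that check; it is an illustration under standard CRC32C conventions (seed 0xFFFFFFFF, reflected polynomial 0x82F63B78, final complement), not SPDK's shipped code, which computes the digest with its accelerated spdk_crc32c_update() helper.

```c
/* Illustrative receiver-side data-digest check for an NVMe/TCP PDU.
 * Bitwise CRC32C (Castagnoli); SPDK's real path is table/HW accelerated. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;                 /* DDGST seed */

    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
    }
    return crc ^ 0xFFFFFFFFu;                   /* final complement */
}

/* Recompute over the received payload and compare with the DDGST
 * field carried at the end of the PDU; a mismatch is what the log
 * reports as "Data digest error". */
static bool ddgst_ok(const void *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst;
}

int main(void)
{
    uint8_t payload[16] = { 0 };
    uint32_t good = crc32c(payload, sizeof(payload));

    printf("intact:    %d\n", ddgst_ok(payload, sizeof(payload), good));
    printf("corrupted: %d\n", ddgst_ok(payload, sizeof(payload), good ^ 1));
    return 0;
}
```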
[... the pattern then continues unchanged -- WRITE commands on sqid:1 (cid:0, with occasional cid:15 runs) each failing with a data digest error on tqpair=(0x203e420) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- for about 50 more commands, from [2024-11-16 16:42:24.087231] through [2024-11-16 16:42:24.296302] ...]
00:22:46.950 [2024-11-16 16:42:24.300111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90
00:22:46.950 [2024-11-16 16:42:24.300288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:46.950 [2024-11-16 16:42:24.300309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
p:0 m:0 dnr:0 00:22:46.950 [2024-11-16 16:42:24.304008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.950 [2024-11-16 16:42:24.304192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.950 [2024-11-16 16:42:24.304212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.950 [2024-11-16 16:42:24.307890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.950 [2024-11-16 16:42:24.307989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.950 [2024-11-16 16:42:24.308010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.950 [2024-11-16 16:42:24.311888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.312056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.312089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.315871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.316135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.316216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.319752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.319871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.319891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.323718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.323906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.323927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.327626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.327793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.327814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.331536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.331715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.331736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.335591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.335699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.335720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.339567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.339667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.339688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.343564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.343727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.343748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.347481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.347799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.347822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.351407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.351518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.351539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.355397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.355573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.355593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.359253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.359350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.359371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.363208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.363299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.363320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.367212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.367315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.367336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.371105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.371220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.371240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.375109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.375278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.375299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.379048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.379316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.379394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.382917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.383037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.383057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.386989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.387171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.387192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.390927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.391138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.391161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.394948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.395111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.395131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.398798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.398987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.399009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.402877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.402968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.402990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.406903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.407081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.407101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.410856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.411164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 
[2024-11-16 16:42:24.411197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.414895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.951 [2024-11-16 16:42:24.415022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.951 [2024-11-16 16:42:24.415043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.951 [2024-11-16 16:42:24.418988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.952 [2024-11-16 16:42:24.419194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.952 [2024-11-16 16:42:24.419215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:46.952 [2024-11-16 16:42:24.422924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.952 [2024-11-16 16:42:24.423039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.952 [2024-11-16 16:42:24.423077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:46.952 [2024-11-16 16:42:24.427079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.952 [2024-11-16 16:42:24.427229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.952 [2024-11-16 16:42:24.427252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:46.952 [2024-11-16 16:42:24.431141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.952 [2024-11-16 16:42:24.431292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.952 [2024-11-16 16:42:24.431315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:46.952 [2024-11-16 16:42:24.435185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:46.952 [2024-11-16 16:42:24.435343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:46.952 [2024-11-16 16:42:24.435375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.439285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.439384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.439426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.443371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.443642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.443703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.447402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.447655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.447679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.451470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.451653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.451675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.455525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.455651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.455674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.459555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.459748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.459772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.463594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.463779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.463804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.467726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.467863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.467886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.471792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.471947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.471968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.475697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.475971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.476011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.479652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.479783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.479805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.483681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.483826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.483847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.487645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.487752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.487773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.491525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.491686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.491707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.495501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.495631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.495652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.499413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.499530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.499551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.503328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.503494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.503514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.507222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.507472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.507511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.511217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.511323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.511344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.515243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.515438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.515459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.519192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.519285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.519306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.523157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.523295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.523316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.527076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.527202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.527222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.221 [2024-11-16 16:42:24.530933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.221 [2024-11-16 16:42:24.531050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.221 [2024-11-16 16:42:24.531081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.534860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.535032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.535053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.538910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.539129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.539150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.542848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.543060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.543083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.546798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.546946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.546967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.550703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 
16:42:24.550807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.550829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.554688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.554864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.554884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.558548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.558662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.558683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.562443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.562549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.562571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.566461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.566634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.566655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.570335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.570638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.570687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.222 [2024-11-16 16:42:24.574298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x203e420) with pdu=0x2000190fef90 00:22:47.222 [2024-11-16 16:42:24.574399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:47.222 [2024-11-16 16:42:24.574420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.222 00:22:47.222 Latency(us) 00:22:47.222 [2024-11-16T16:42:24.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:22:47.222 [2024-11-16T16:42:24.713Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:47.222 nvme0n1 : 2.00 7834.06 979.26 0.00 0.00 2037.82 1459.67 4289.63 00:22:47.222 [2024-11-16T16:42:24.713Z] =================================================================================================================== 00:22:47.222 [2024-11-16T16:42:24.713Z] Total : 7834.06 979.26 0.00 0.00 2037.82 1459.67 4289.63 00:22:47.222 0 00:22:47.222 16:42:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:47.222 16:42:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:47.222 16:42:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:47.222 16:42:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:47.222 | .driver_specific 00:22:47.222 | .nvme_error 00:22:47.222 | .status_code 00:22:47.222 | .command_transient_transport_error' 00:22:47.498 16:42:24 -- host/digest.sh@71 -- # (( 505 > 0 )) 00:22:47.498 16:42:24 -- host/digest.sh@73 -- # killprocess 98205 00:22:47.498 16:42:24 -- common/autotest_common.sh@936 -- # '[' -z 98205 ']' 00:22:47.498 16:42:24 -- common/autotest_common.sh@940 -- # kill -0 98205 00:22:47.498 16:42:24 -- common/autotest_common.sh@941 -- # uname 00:22:47.498 16:42:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.498 16:42:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98205 00:22:47.498 killing process with pid 98205 00:22:47.498 Received shutdown signal, test time was about 2.000000 seconds 00:22:47.498 00:22:47.498 Latency(us) 00:22:47.498 [2024-11-16T16:42:24.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.498 [2024-11-16T16:42:24.989Z] =================================================================================================================== 00:22:47.498 [2024-11-16T16:42:24.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.498 16:42:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:47.498 16:42:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:47.498 16:42:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98205' 00:22:47.498 16:42:24 -- common/autotest_common.sh@955 -- # kill 98205 00:22:47.498 16:42:24 -- common/autotest_common.sh@960 -- # wait 98205 00:22:47.756 16:42:25 -- host/digest.sh@115 -- # killprocess 97891 00:22:47.756 16:42:25 -- common/autotest_common.sh@936 -- # '[' -z 97891 ']' 00:22:47.756 16:42:25 -- common/autotest_common.sh@940 -- # kill -0 97891 00:22:47.756 16:42:25 -- common/autotest_common.sh@941 -- # uname 00:22:47.756 16:42:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.756 16:42:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97891 00:22:47.756 killing process with pid 97891 00:22:47.756 16:42:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:47.756 16:42:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:47.756 16:42:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97891' 00:22:47.756 16:42:25 -- common/autotest_common.sh@955 -- # kill 97891 00:22:47.756 16:42:25 -- common/autotest_common.sh@960 -- # wait 97891 00:22:48.014 ************************************ 00:22:48.014 END TEST nvmf_digest_error 00:22:48.014 ************************************ 00:22:48.014 00:22:48.014 real 0m18.070s 00:22:48.014 user 0m33.046s 00:22:48.014 sys 0m5.373s 
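The pass/fail check for this test reduces to a single RPC round trip: bdev_get_iostat is issued over the bperf control socket and the command_transient_transport_error counter is extracted with jq, which is what the (( 505 > 0 )) evaluation above asserts on. A minimal standalone sketch of that query, assuming an SPDK bdevperf instance is still listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1:

    # Sketch: read back the transient transport error counter that the
    # data-digest test asserts on. Assumes bdevperf is serving RPCs at
    # /var/tmp/bperf.sock and that the bdev under test is named nvme0n1.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # Each WRITE whose data digest was corrupted completes with TRANSIENT
    # TRANSPORT ERROR (00/22), so a run that injected digest errors must
    # report a nonzero count here.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"

This mirrors why every injected CRC32C mismatch above appears twice in the log: once as the tcp.c data_crc32_calc_done error on the target side, and once as the TRANSIENT TRANSPORT ERROR (00/22) completion counted by the bdev layer.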
00:22:48.014 16:42:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:48.014 16:42:25 -- common/autotest_common.sh@10 -- # set +x 00:22:48.014 16:42:25 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:48.014 16:42:25 -- host/digest.sh@139 -- # nvmftestfini 00:22:48.014 16:42:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:48.014 16:42:25 -- nvmf/common.sh@116 -- # sync 00:22:48.014 16:42:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:48.014 16:42:25 -- nvmf/common.sh@119 -- # set +e 00:22:48.014 16:42:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:48.014 16:42:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:48.014 rmmod nvme_tcp 00:22:48.014 rmmod nvme_fabrics 00:22:48.014 rmmod nvme_keyring 00:22:48.272 16:42:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:48.272 16:42:25 -- nvmf/common.sh@123 -- # set -e 00:22:48.272 16:42:25 -- nvmf/common.sh@124 -- # return 0 00:22:48.272 16:42:25 -- nvmf/common.sh@477 -- # '[' -n 97891 ']' 00:22:48.272 16:42:25 -- nvmf/common.sh@478 -- # killprocess 97891 00:22:48.272 16:42:25 -- common/autotest_common.sh@936 -- # '[' -z 97891 ']' 00:22:48.272 16:42:25 -- common/autotest_common.sh@940 -- # kill -0 97891 00:22:48.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97891) - No such process 00:22:48.272 Process with pid 97891 is not found 00:22:48.272 16:42:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97891 is not found' 00:22:48.272 16:42:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:48.272 16:42:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:48.272 16:42:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:48.272 16:42:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.272 16:42:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:48.272 16:42:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.272 16:42:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.272 16:42:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.272 16:42:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:48.272 00:22:48.272 real 0m37.165s 00:22:48.272 user 1m6.612s 00:22:48.272 sys 0m11.061s 00:22:48.272 16:42:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:48.272 16:42:25 -- common/autotest_common.sh@10 -- # set +x 00:22:48.272 ************************************ 00:22:48.272 END TEST nvmf_digest 00:22:48.272 ************************************ 00:22:48.272 16:42:25 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:22:48.272 16:42:25 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:22:48.272 16:42:25 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:48.273 16:42:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:48.273 16:42:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:48.273 16:42:25 -- common/autotest_common.sh@10 -- # set +x 00:22:48.273 ************************************ 00:22:48.273 START TEST nvmf_mdns_discovery 00:22:48.273 ************************************ 00:22:48.273 16:42:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:48.273 * Looking for test storage... 
00:22:48.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:48.273 16:42:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:48.273 16:42:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:48.273 16:42:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:48.532 16:42:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:48.532 16:42:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:48.532 16:42:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:48.532 16:42:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:48.532 16:42:25 -- scripts/common.sh@335 -- # IFS=.-: 00:22:48.532 16:42:25 -- scripts/common.sh@335 -- # read -ra ver1 00:22:48.532 16:42:25 -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.532 16:42:25 -- scripts/common.sh@336 -- # read -ra ver2 00:22:48.532 16:42:25 -- scripts/common.sh@337 -- # local 'op=<' 00:22:48.532 16:42:25 -- scripts/common.sh@339 -- # ver1_l=2 00:22:48.532 16:42:25 -- scripts/common.sh@340 -- # ver2_l=1 00:22:48.532 16:42:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:48.532 16:42:25 -- scripts/common.sh@343 -- # case "$op" in 00:22:48.532 16:42:25 -- scripts/common.sh@344 -- # : 1 00:22:48.532 16:42:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:48.532 16:42:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:48.532 16:42:25 -- scripts/common.sh@364 -- # decimal 1 00:22:48.532 16:42:25 -- scripts/common.sh@352 -- # local d=1 00:22:48.532 16:42:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.532 16:42:25 -- scripts/common.sh@354 -- # echo 1 00:22:48.532 16:42:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:48.532 16:42:25 -- scripts/common.sh@365 -- # decimal 2 00:22:48.532 16:42:25 -- scripts/common.sh@352 -- # local d=2 00:22:48.532 16:42:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.532 16:42:25 -- scripts/common.sh@354 -- # echo 2 00:22:48.532 16:42:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:48.532 16:42:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:48.532 16:42:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:48.532 16:42:25 -- scripts/common.sh@367 -- # return 0 00:22:48.532 16:42:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.532 16:42:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:48.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.532 --rc genhtml_branch_coverage=1 00:22:48.532 --rc genhtml_function_coverage=1 00:22:48.532 --rc genhtml_legend=1 00:22:48.532 --rc geninfo_all_blocks=1 00:22:48.532 --rc geninfo_unexecuted_blocks=1 00:22:48.532 00:22:48.532 ' 00:22:48.532 16:42:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:48.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.532 --rc genhtml_branch_coverage=1 00:22:48.532 --rc genhtml_function_coverage=1 00:22:48.532 --rc genhtml_legend=1 00:22:48.532 --rc geninfo_all_blocks=1 00:22:48.532 --rc geninfo_unexecuted_blocks=1 00:22:48.532 00:22:48.532 ' 00:22:48.532 16:42:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:48.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.532 --rc genhtml_branch_coverage=1 00:22:48.532 --rc genhtml_function_coverage=1 00:22:48.532 --rc genhtml_legend=1 00:22:48.532 --rc geninfo_all_blocks=1 00:22:48.532 --rc geninfo_unexecuted_blocks=1 00:22:48.532 00:22:48.532 ' 00:22:48.532 
16:42:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:48.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.532 --rc genhtml_branch_coverage=1 00:22:48.532 --rc genhtml_function_coverage=1 00:22:48.532 --rc genhtml_legend=1 00:22:48.532 --rc geninfo_all_blocks=1 00:22:48.532 --rc geninfo_unexecuted_blocks=1 00:22:48.532 00:22:48.532 ' 00:22:48.532 16:42:25 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:48.532 16:42:25 -- nvmf/common.sh@7 -- # uname -s 00:22:48.532 16:42:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.532 16:42:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.532 16:42:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.532 16:42:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.532 16:42:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.532 16:42:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.532 16:42:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.532 16:42:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.532 16:42:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.532 16:42:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.532 16:42:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:22:48.532 16:42:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:22:48.532 16:42:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.532 16:42:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.532 16:42:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:48.532 16:42:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:48.532 16:42:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.532 16:42:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.532 16:42:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.532 16:42:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.532 16:42:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.532 16:42:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.532 16:42:25 -- paths/export.sh@5 -- # export PATH 00:22:48.532 16:42:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.532 16:42:25 -- nvmf/common.sh@46 -- # : 0 00:22:48.532 16:42:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:48.532 16:42:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:48.532 16:42:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:48.532 16:42:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.532 16:42:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.533 16:42:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:48.533 16:42:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:48.533 16:42:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:48.533 16:42:25 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:48.533 16:42:25 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:48.533 16:42:25 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:48.533 16:42:25 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:48.533 16:42:25 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:48.533 16:42:25 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:48.533 16:42:25 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:48.533 16:42:25 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:48.533 16:42:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:48.533 16:42:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.533 16:42:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:48.533 16:42:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:48.533 16:42:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:48.533 16:42:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.533 16:42:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.533 16:42:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.533 16:42:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:48.533 16:42:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:48.533 16:42:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:48.533 16:42:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:48.533 16:42:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:48.533 16:42:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:48.533 16:42:25 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:48.533 16:42:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.533 16:42:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:48.533 16:42:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:48.533 16:42:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:48.533 16:42:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:48.533 16:42:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:48.533 16:42:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.533 16:42:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:48.533 16:42:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:48.533 16:42:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:48.533 16:42:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:48.533 16:42:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:48.533 16:42:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:48.533 Cannot find device "nvmf_tgt_br" 00:22:48.533 16:42:25 -- nvmf/common.sh@154 -- # true 00:22:48.533 16:42:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:48.533 Cannot find device "nvmf_tgt_br2" 00:22:48.533 16:42:25 -- nvmf/common.sh@155 -- # true 00:22:48.533 16:42:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:48.533 16:42:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:48.533 Cannot find device "nvmf_tgt_br" 00:22:48.533 16:42:25 -- nvmf/common.sh@157 -- # true 00:22:48.533 16:42:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:48.533 Cannot find device "nvmf_tgt_br2" 00:22:48.533 16:42:25 -- nvmf/common.sh@158 -- # true 00:22:48.533 16:42:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:48.533 16:42:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:48.533 16:42:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:48.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:48.533 16:42:25 -- nvmf/common.sh@161 -- # true 00:22:48.533 16:42:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:48.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:48.533 16:42:25 -- nvmf/common.sh@162 -- # true 00:22:48.533 16:42:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:48.533 16:42:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:48.533 16:42:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:48.533 16:42:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:48.533 16:42:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:48.533 16:42:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:48.792 16:42:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:48.792 16:42:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:48.792 16:42:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:48.792 16:42:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:48.792 16:42:26 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:22:48.792 16:42:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:48.792 16:42:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:48.792 16:42:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:48.792 16:42:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:48.792 16:42:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:48.792 16:42:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:48.792 16:42:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:48.792 16:42:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:48.792 16:42:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:48.792 16:42:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:48.792 16:42:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:48.792 16:42:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:48.792 16:42:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:48.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:22:48.792 00:22:48.792 --- 10.0.0.2 ping statistics --- 00:22:48.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.792 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:48.792 16:42:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:48.792 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:48.792 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:22:48.792 00:22:48.792 --- 10.0.0.3 ping statistics --- 00:22:48.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.792 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:48.792 16:42:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:48.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:48.792 00:22:48.792 --- 10.0.0.1 ping statistics --- 00:22:48.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.792 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:48.792 16:42:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.792 16:42:26 -- nvmf/common.sh@421 -- # return 0 00:22:48.792 16:42:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:48.792 16:42:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.792 16:42:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:48.792 16:42:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:48.792 16:42:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.792 16:42:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:48.792 16:42:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:48.792 16:42:26 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:48.792 16:42:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:48.792 16:42:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:48.792 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:48.792 16:42:26 -- nvmf/common.sh@469 -- # nvmfpid=98510 00:22:48.792 16:42:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:48.792 16:42:26 -- nvmf/common.sh@470 -- # waitforlisten 98510 00:22:48.792 16:42:26 -- common/autotest_common.sh@829 -- # '[' -z 98510 ']' 00:22:48.792 16:42:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.792 16:42:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.792 16:42:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.792 16:42:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.792 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:48.792 [2024-11-16 16:42:26.236266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:48.792 [2024-11-16 16:42:26.236360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.051 [2024-11-16 16:42:26.380705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.051 [2024-11-16 16:42:26.454815] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:49.051 [2024-11-16 16:42:26.454999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.051 [2024-11-16 16:42:26.455018] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.051 [2024-11-16 16:42:26.455030] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
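[Annotation] The nvmf_veth_init block above builds the test network: namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3), the initiator end nvmf_init_if stays in the root namespace at 10.0.0.1, and the three peer ends are enslaved to bridge nvmf_br; the three pings confirm the triangle works before anything NVMe-related starts. The earlier "Cannot find device" / "Cannot open network namespace" errors are just best-effort cleanup of a previous run and are expected. A minimal standalone sketch of the same topology, with device and namespace names copied from the log (run as root):

# create the namespace and the three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side ends into the namespace and address everything
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring links up and bridge the root-namespace peer ends together
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

# allow NVMe/TCP (port 4420) in, let traffic hairpin across the bridge, then verify
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1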
00:22:49.051 [2024-11-16 16:42:26.455085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.051 16:42:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.051 16:42:26 -- common/autotest_common.sh@862 -- # return 0 00:22:49.051 16:42:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:49.051 16:42:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:49.051 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.051 16:42:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.051 16:42:26 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:49.051 16:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.051 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 16:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:49.310 16:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.310 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 16:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.310 16:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.310 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 [2024-11-16 16:42:26.663035] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.310 16:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:49.310 16:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.310 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 [2024-11-16 16:42:26.671182] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:49.310 16:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:49.310 16:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.310 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 null0 00:22:49.310 16:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:49.310 16:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.310 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 null1 00:22:49.310 16:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:49.310 16:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.310 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 null2 00:22:49.310 16:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:49.310 16:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.310 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 null3 00:22:49.310 16:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
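[Annotation] The rpc_cmd calls in this stretch configure the just-launched target over its default socket /var/tmp/spdk.sock: discovery-log filtering is set to address, framework_start_init finishes startup (the target was launched with --wait-for-rpc), a TCP transport is created, the well-known discovery subsystem gets a listener on 10.0.0.2:8009, and four null bdevs (1000 MB, 512-byte blocks) are created as future namespaces. The same sequence as a plain script; the rpc.py location under $SPDK_DIR is an assumption, all arguments are copied from the log:

# assumed path to SPDK's RPC client; socket is the target's default
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_set_config --discovery-filter=address   # filter discovery log entries by address only
$rpc framework_start_init                         # resume init: target ran with --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags as in the test
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009                    # discovery service on the first target IP

for n in null0 null1 null2 null3; do              # 1000 MB null bdevs with 512 B blocks
    $rpc bdev_null_create "$n" 1000 512
done
$rpc bdev_wait_for_examine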
00:22:49.310 16:42:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.310 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 16:42:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@47 -- # hostpid=98541 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:49.310 16:42:26 -- host/mdns_discovery.sh@48 -- # waitforlisten 98541 /tmp/host.sock 00:22:49.310 16:42:26 -- common/autotest_common.sh@829 -- # '[' -z 98541 ']' 00:22:49.310 16:42:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:49.310 16:42:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.310 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:49.310 16:42:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:49.310 16:42:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.310 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:22:49.310 [2024-11-16 16:42:26.774931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:49.310 [2024-11-16 16:42:26.775027] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98541 ] 00:22:49.569 [2024-11-16 16:42:26.917841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.569 [2024-11-16 16:42:26.999641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:49.569 [2024-11-16 16:42:26.999849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.503 16:42:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.503 16:42:27 -- common/autotest_common.sh@862 -- # return 0 00:22:50.503 16:42:27 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:50.504 16:42:27 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:50.504 16:42:27 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:50.504 16:42:27 -- host/mdns_discovery.sh@57 -- # avahipid=98577 00:22:50.504 16:42:27 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:50.504 16:42:27 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:50.504 16:42:27 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:50.504 Process 1067 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:50.504 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:50.504 Successfully dropped root privileges. 00:22:50.504 avahi-daemon 0.8 starting up. 00:22:50.504 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:50.504 Successfully called chroot(). 00:22:50.504 Successfully dropped remaining capabilities. 00:22:50.504 No service file found in /etc/avahi/services. 00:22:51.438 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:51.438 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
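[Annotation] avahi-daemon is started inside the target namespace with its configuration piped in over an anonymous fd (the -f /dev/fd/63 above is bash process substitution), restricted to the two target veths and IPv4 only; the "Joining mDNS multicast group" lines confirm it bound 10.0.0.3 and, just below, 10.0.0.2. The same setup written out with a regular file instead of the fd trick (file path is illustrative):

# same [server] section the test generates inline
cat > /tmp/avahi-mdns-test.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF

# run the daemon in the target namespace with that config (-f = config file)
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-mdns-test.conf &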
00:22:51.438 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:51.438 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:51.438 Network interface enumeration completed. 00:22:51.438 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:22:51.438 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:51.438 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:22:51.438 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:51.438 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2420643624. 00:22:51.438 16:42:28 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:51.438 16:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.438 16:42:28 -- common/autotest_common.sh@10 -- # set +x 00:22:51.438 16:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.438 16:42:28 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:51.438 16:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.438 16:42:28 -- common/autotest_common.sh@10 -- # set +x 00:22:51.438 16:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.438 16:42:28 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:51.438 16:42:28 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:51.438 16:42:28 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:51.438 16:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.438 16:42:28 -- common/autotest_common.sh@10 -- # set +x 00:22:51.438 16:42:28 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:51.438 16:42:28 -- host/mdns_discovery.sh@68 -- # sort 00:22:51.438 16:42:28 -- host/mdns_discovery.sh@68 -- # xargs 00:22:51.697 16:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.697 16:42:28 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:51.697 16:42:28 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:51.697 16:42:28 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.697 16:42:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.697 16:42:28 -- common/autotest_common.sh@10 -- # set +x 00:22:51.697 16:42:28 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:51.697 16:42:28 -- host/mdns_discovery.sh@64 -- # sort 00:22:51.697 16:42:28 -- host/mdns_discovery.sh@64 -- # xargs 00:22:51.697 16:42:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:51.697 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.697 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.697 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@68 -- # sort 00:22:51.697 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.697 16:42:29 -- common/autotest_common.sh@10 -- # set +x 
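[Annotation] With avahi up, the second SPDK app (started earlier on /tmp/host.sock with core mask 0x1) begins mDNS discovery: bdev_nvme_start_mdns_discovery browses for _nvme-disc._tcp services and attaches to whatever discovery controllers resolve, presenting nqn.2021-12.io.spdk:test as the host NQN. The get_subsystem_names and get_bdev_list checks that follow are rpc-plus-jq one-liners, polled against expected values (empty at this point, since nothing has been published yet). A sketch of that pattern using the same pipelines visible in the log; the rpc.py path is again an assumption:

rpc="$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock"

$rpc log_set_flag bdev_nvme                      # verbose bdev_nvme logging, as in the test
$rpc bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

get_subsystem_names() {                          # sorted controller names on one line
    $rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {                                # sorted bdev names on one line
    $rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

[[ "$(get_subsystem_names)" == "" ]]             # nothing attached yet
[[ "$(get_bdev_list)" == "" ]]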
00:22:51.697 16:42:29 -- host/mdns_discovery.sh@68 -- # xargs 00:22:51.697 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.697 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:51.697 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@64 -- # sort 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@64 -- # xargs 00:22:51.697 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:51.697 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.697 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.697 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:51.697 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@68 -- # sort 00:22:51.697 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:51.697 16:42:29 -- host/mdns_discovery.sh@68 -- # xargs 00:22:51.697 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.961 16:42:29 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:51.961 16:42:29 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:51.961 16:42:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.961 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.961 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.961 16:42:29 -- host/mdns_discovery.sh@64 -- # sort 00:22:51.961 16:42:29 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:51.961 16:42:29 -- host/mdns_discovery.sh@64 -- # xargs 00:22:51.962 [2024-11-16 16:42:29.216257] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:51.962 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.962 16:42:29 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:51.962 16:42:29 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:51.962 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.962 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.962 [2024-11-16 16:42:29.263742] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.962 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.962 16:42:29 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:51.962 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.962 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.962 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.962 16:42:29 -- host/mdns_discovery.sh@111 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:51.962 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.962 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.962 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.962 16:42:29 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:51.962 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.962 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.962 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.962 16:42:29 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:51.962 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.962 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.963 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.963 16:42:29 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:51.963 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.963 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.963 [2024-11-16 16:42:29.303723] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:51.963 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.963 16:42:29 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:51.963 16:42:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.963 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:22:51.963 [2024-11-16 16:42:29.311718] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:51.963 16:42:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.963 16:42:29 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98628 00:22:51.963 16:42:29 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:51.963 16:42:29 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:52.900 [2024-11-16 16:42:30.116265] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:52.900 Established under name 'CDC' 00:22:53.158 [2024-11-16 16:42:30.516272] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:53.158 [2024-11-16 16:42:30.516296] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:22:53.158 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:53.158 cookie is 0 00:22:53.158 is_local: 1 00:22:53.158 our_own: 0 00:22:53.158 wide_area: 0 00:22:53.158 multicast: 1 00:22:53.158 cached: 1 00:22:53.158 [2024-11-16 16:42:30.616265] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:53.158 [2024-11-16 16:42:30.616285] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:22:53.158 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:53.158 cookie is 0 00:22:53.158 is_local: 1 00:22:53.158 our_own: 0 00:22:53.158 wide_area: 0 00:22:53.158 multicast: 1 00:22:53.158 cached: 1 00:22:54.092 [2024-11-16 16:42:31.527161] 
bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:54.092 [2024-11-16 16:42:31.527189] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:54.092 [2024-11-16 16:42:31.527207] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:54.351 [2024-11-16 16:42:31.613279] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:54.351 [2024-11-16 16:42:31.626863] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:54.351 [2024-11-16 16:42:31.626882] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:54.351 [2024-11-16 16:42:31.626901] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:54.351 [2024-11-16 16:42:31.674518] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:54.351 [2024-11-16 16:42:31.674543] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:54.351 [2024-11-16 16:42:31.712692] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:54.351 [2024-11-16 16:42:31.767243] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:54.351 [2024-11-16 16:42:31.767267] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:56.880 16:42:34 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:56.880 16:42:34 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:56.880 16:42:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.880 16:42:34 -- common/autotest_common.sh@10 -- # set +x 00:22:56.880 16:42:34 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:56.880 16:42:34 -- host/mdns_discovery.sh@80 -- # sort 00:22:56.880 16:42:34 -- host/mdns_discovery.sh@80 -- # xargs 00:22:56.880 16:42:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@76 -- # sort 00:22:57.139 16:42:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.139 16:42:34 -- common/autotest_common.sh@10 -- # set +x 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@76 -- # xargs 00:22:57.139 16:42:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.139 16:42:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.139 16:42:34 -- 
common/autotest_common.sh@10 -- # set +x 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@68 -- # sort 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@68 -- # xargs 00:22:57.139 16:42:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:57.139 16:42:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.139 16:42:34 -- common/autotest_common.sh@10 -- # set +x 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@64 -- # sort 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@64 -- # xargs 00:22:57.139 16:42:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:57.139 16:42:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:57.139 16:42:34 -- common/autotest_common.sh@10 -- # set +x 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@72 -- # xargs 00:22:57.139 16:42:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:57.139 16:42:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.139 16:42:34 -- common/autotest_common.sh@10 -- # set +x 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@72 -- # xargs 00:22:57.139 16:42:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:57.139 16:42:34 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:57.398 16:42:34 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:57.398 16:42:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.398 16:42:34 -- common/autotest_common.sh@10 -- # set +x 00:22:57.398 16:42:34 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:57.398 16:42:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.398 16:42:34 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:57.398 16:42:34 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:22:57.398 16:42:34 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:57.398 16:42:34 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:57.398 16:42:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.398 16:42:34 -- common/autotest_common.sh@10 -- # set +x 00:22:57.398 16:42:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.398 16:42:34 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:57.398 16:42:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.398 16:42:34 -- common/autotest_common.sh@10 -- # set +x 00:22:57.398 16:42:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.398 16:42:34 -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.332 16:42:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.332 16:42:35 -- common/autotest_common.sh@10 -- # set +x 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@64 -- # sort 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@64 -- # xargs 00:22:58.332 16:42:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:58.332 16:42:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:58.332 16:42:35 -- common/autotest_common.sh@10 -- # set +x 00:22:58.332 16:42:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:58.332 16:42:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.332 16:42:35 -- common/autotest_common.sh@10 -- # set +x 00:22:58.332 [2024-11-16 16:42:35.803369] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:58.332 [2024-11-16 16:42:35.803617] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:58.332 [2024-11-16 16:42:35.803638] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.332 [2024-11-16 16:42:35.803664] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:58.332 [2024-11-16 16:42:35.803675] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.332 16:42:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:22:58.332 16:42:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.332 16:42:35 -- common/autotest_common.sh@10 -- # set +x 00:22:58.332 [2024-11-16 16:42:35.811333] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:58.332 [2024-11-16 16:42:35.811631] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:58.332 [2024-11-16 16:42:35.811673] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:58.332 16:42:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.332 16:42:35 -- host/mdns_discovery.sh@149 -- # sleep 1 00:22:58.591 [2024-11-16 16:42:35.942708] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:22:58.591 [2024-11-16 16:42:35.942841] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:22:58.591 [2024-11-16 16:42:35.999881] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:58.591 [2024-11-16 16:42:35.999903] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:58.591 [2024-11-16 16:42:35.999909] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:58.591 [2024-11-16 16:42:35.999923] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.591 [2024-11-16 16:42:35.999993] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:58.591 [2024-11-16 16:42:36.000002] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 
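[Annotation] The discovery chain seen above was set off by avahi-publish registering a "CDC" _nvme-disc._tcp service on port 8009. Once both discovery controllers are attached, adding 4421 listeners to cnode0 and cnode20 raises an AER on each; the host re-reads the discovery log page and the new ports show up as additional paths on the existing mdns0_nvme0/mdns1_nvme0 controllers ("new path ... found again") rather than as new controllers. The trsvcid checks just below confirm two paths each. The same step in isolation (RPCs copied from the log, rpc.py paths assumed):

# target side: second port per subsystem (default socket /var/tmp/spdk.sock)
tgt_rpc="$SPDK_DIR/scripts/rpc.py"
$tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4421
$tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421

# host side: each attached controller should now report both ports
host_rpc="$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock"
$host_rpc bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # expect: 4420 4421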
00:22:58.591 [2024-11-16 16:42:36.000007] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:58.591 [2024-11-16 16:42:36.000019] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:58.591 [2024-11-16 16:42:36.045796] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:58.591 [2024-11-16 16:42:36.045815] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:58.591 [2024-11-16 16:42:36.045848] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:58.591 [2024-11-16 16:42:36.045856] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.528 16:42:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.528 16:42:36 -- common/autotest_common.sh@10 -- # set +x 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@68 -- # sort 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@68 -- # xargs 00:22:59.528 16:42:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@64 -- # sort 00:22:59.528 16:42:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@64 -- # xargs 00:22:59.528 16:42:36 -- common/autotest_common.sh@10 -- # set +x 00:22:59.528 16:42:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:59.528 16:42:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.528 16:42:36 -- common/autotest_common.sh@10 -- # set +x 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@72 -- # xargs 00:22:59.528 16:42:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:59.528 
16:42:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.528 16:42:36 -- common/autotest_common.sh@10 -- # set +x 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@72 -- # xargs 00:22:59.528 16:42:36 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.528 16:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.789 16:42:37 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:59.789 16:42:37 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:22:59.789 16:42:37 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:59.789 16:42:37 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:59.789 16:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.789 16:42:37 -- common/autotest_common.sh@10 -- # set +x 00:22:59.789 16:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.789 16:42:37 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:59.789 16:42:37 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:59.789 16:42:37 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:22:59.789 16:42:37 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:59.789 16:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.789 16:42:37 -- common/autotest_common.sh@10 -- # set +x 00:22:59.789 [2024-11-16 16:42:37.076633] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:59.789 [2024-11-16 16:42:37.076660] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:59.789 [2024-11-16 16:42:37.076689] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:59.789 [2024-11-16 16:42:37.076700] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:59.789 [2024-11-16 16:42:37.077278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.790 [2024-11-16 16:42:37.077320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.790 [2024-11-16 16:42:37.077331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.790 [2024-11-16 16:42:37.077339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.790 [2024-11-16 16:42:37.077348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.790 [2024-11-16 16:42:37.077356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.790 [2024-11-16 16:42:37.077365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.790 [2024-11-16 16:42:37.077373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.790 [2024-11-16 16:42:37.077381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7aa0 is 
same with the state(5) to be set 00:22:59.790 16:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.790 16:42:37 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:59.790 16:42:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.790 16:42:37 -- common/autotest_common.sh@10 -- # set +x 00:22:59.790 [2024-11-16 16:42:37.084649] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:59.790 [2024-11-16 16:42:37.084710] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:59.790 [2024-11-16 16:42:37.087225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7aa0 (9): Bad file descriptor 00:22:59.790 16:42:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.790 16:42:37 -- host/mdns_discovery.sh@162 -- # sleep 1 00:22:59.790 [2024-11-16 16:42:37.093258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.790 [2024-11-16 16:42:37.093285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.790 [2024-11-16 16:42:37.093297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.790 [2024-11-16 16:42:37.093305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.790 [2024-11-16 16:42:37.093313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.790 [2024-11-16 16:42:37.093321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.790 [2024-11-16 16:42:37.093329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.790 [2024-11-16 16:42:37.093337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.790 [2024-11-16 16:42:37.093345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb2760 is same with the state(5) to be set 00:22:59.790 [2024-11-16 16:42:37.097243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.790 [2024-11-16 16:42:37.097326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.097367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.097382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7aa0 with addr=10.0.0.2, port=4420 00:22:59.790 [2024-11-16 16:42:37.097391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7aa0 is same with the state(5) to be set 00:22:59.790 [2024-11-16 16:42:37.097405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7aa0 (9): Bad file descriptor 00:22:59.790 [2024-11-16 16:42:37.097417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.790 [2024-11-16 16:42:37.097425] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.790 [2024-11-16 16:42:37.097434] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.790 [2024-11-16 16:42:37.097448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.790 [2024-11-16 16:42:37.103227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb2760 (9): Bad file descriptor 00:22:59.790 [2024-11-16 16:42:37.107290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.790 [2024-11-16 16:42:37.107362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.107401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.107416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7aa0 with addr=10.0.0.2, port=4420 00:22:59.790 [2024-11-16 16:42:37.107425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7aa0 is same with the state(5) to be set 00:22:59.790 [2024-11-16 16:42:37.107439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7aa0 (9): Bad file descriptor 00:22:59.790 [2024-11-16 16:42:37.107451] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.790 [2024-11-16 16:42:37.107459] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.790 [2024-11-16 16:42:37.107467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.790 [2024-11-16 16:42:37.107479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.790 [2024-11-16 16:42:37.113235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:59.790 [2024-11-16 16:42:37.113304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.113345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.113358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb2760 with addr=10.0.0.3, port=4420 00:22:59.790 [2024-11-16 16:42:37.113368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb2760 is same with the state(5) to be set 00:22:59.790 [2024-11-16 16:42:37.113381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb2760 (9): Bad file descriptor 00:22:59.790 [2024-11-16 16:42:37.113393] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:59.790 [2024-11-16 16:42:37.113400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:59.790 [2024-11-16 16:42:37.113408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:59.790 [2024-11-16 16:42:37.113421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
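[Annotation] errno 111 in the connect() failures above is ECONNREFUSED: the test has just removed the 4420 listeners from both subsystems, so every reconnect attempt bdev_nvme makes to 10.0.0.2:4420 and 10.0.0.3:4420 is refused, the controller reset fails, and the disconnect/reset cycle repeats (the entries that follow are further iterations of the same loop) while I/O remains possible over the surviving 4421 paths. A quick way to confirm the errno mapping on a glibc/Linux box (header path may vary by distro):

# 111 is ECONNREFUSED ("Connection refused") on Linux
grep -w 111 /usr/include/asm-generic/errno.h
#   #define ECONNREFUSED 111 /* Connection refused */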
00:22:59.790 [2024-11-16 16:42:37.117332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.790 [2024-11-16 16:42:37.117398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.117437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.117451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7aa0 with addr=10.0.0.2, port=4420 00:22:59.790 [2024-11-16 16:42:37.117459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7aa0 is same with the state(5) to be set 00:22:59.790 [2024-11-16 16:42:37.117472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7aa0 (9): Bad file descriptor 00:22:59.790 [2024-11-16 16:42:37.117492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.790 [2024-11-16 16:42:37.117502] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.790 [2024-11-16 16:42:37.117510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.790 [2024-11-16 16:42:37.117522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.790 [2024-11-16 16:42:37.123279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:59.790 [2024-11-16 16:42:37.123344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.123383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.123397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb2760 with addr=10.0.0.3, port=4420 00:22:59.790 [2024-11-16 16:42:37.123406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb2760 is same with the state(5) to be set 00:22:59.790 [2024-11-16 16:42:37.123420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb2760 (9): Bad file descriptor 00:22:59.790 [2024-11-16 16:42:37.123432] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:59.790 [2024-11-16 16:42:37.123440] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:59.790 [2024-11-16 16:42:37.123447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:59.790 [2024-11-16 16:42:37.123460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.790 [2024-11-16 16:42:37.127375] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.790 [2024-11-16 16:42:37.127444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.127483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.127497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec7aa0 with addr=10.0.0.2, port=4420 00:22:59.790 [2024-11-16 16:42:37.127506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7aa0 is same with the state(5) to be set 00:22:59.790 [2024-11-16 16:42:37.127520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec7aa0 (9): Bad file descriptor 00:22:59.790 [2024-11-16 16:42:37.127531] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.790 [2024-11-16 16:42:37.127538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.790 [2024-11-16 16:42:37.127546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.790 [2024-11-16 16:42:37.127558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.790 [2024-11-16 16:42:37.133322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:59.790 [2024-11-16 16:42:37.133397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.133438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.790 [2024-11-16 16:42:37.133452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb2760 with addr=10.0.0.3, port=4420 00:22:59.790 [2024-11-16 16:42:37.133461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb2760 is same with the state(5) to be set 00:22:59.791 [2024-11-16 16:42:37.133475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb2760 (9): Bad file descriptor 00:22:59.791 [2024-11-16 16:42:37.133496] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:59.791 [2024-11-16 16:42:37.133505] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:59.791 [2024-11-16 16:42:37.133512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:59.791 [2024-11-16 16:42:37.133525] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.792 [2024-11-16 16:42:37.215928] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:59.792 [2024-11-16 16:42:37.215952] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:59.792 [2024-11-16 16:42:37.215970] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:59.792 [2024-11-16 16:42:37.215999] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:22:59.792 [2024-11-16 16:42:37.216012] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:59.792 [2024-11-16 16:42:37.216025] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:00.051 [2024-11-16 16:42:37.301991] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:00.051 [2024-11-16 16:42:37.302994] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:00.618 16:42:38 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:00.618 16:42:38 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.618 16:42:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.618 16:42:38 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:00.618 16:42:38 -- common/autotest_common.sh@10 -- # set +x 00:23:00.618 16:42:38 -- host/mdns_discovery.sh@68 -- # sort 00:23:00.618 16:42:38 -- host/mdns_discovery.sh@68 -- # xargs 00:23:00.876 16:42:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.876 16:42:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.876 16:42:38 -- common/autotest_common.sh@10 -- # set +x 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@64 -- # sort 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@64 -- # xargs 00:23:00.876 16:42:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:00.876 16:42:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.876 16:42:38 -- common/autotest_common.sh@10 -- # set +x 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@72 -- # xargs 00:23:00.876 16:42:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:00.876 16:42:38 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:00.876 16:42:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.876 16:42:38 -- common/autotest_common.sh@10 -- # set +x 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@72 -- # xargs 00:23:00.876 16:42:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:00.876 16:42:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.876 16:42:38 -- common/autotest_common.sh@10 -- # set +x 00:23:00.876 16:42:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:00.876 16:42:38 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:00.876 16:42:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.876 16:42:38 -- common/autotest_common.sh@10 -- # set +x 00:23:00.876 16:42:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.877 16:42:38 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:01.135 [2024-11-16 16:42:38.416288] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:02.070 16:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@80 -- # sort 00:23:02.070 16:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@80 -- # xargs 00:23:02.070 16:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:02.070 16:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@68 -- # sort 00:23:02.070 16:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@68 -- # xargs 00:23:02.070 16:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:02.070 
16:42:39 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.070 16:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.070 16:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@64 -- # sort 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@64 -- # xargs 00:23:02.070 16:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:02.070 16:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.070 16:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:02.070 16:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:02.070 16:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.070 16:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:02.070 16:42:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.070 16:42:39 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:02.070 16:42:39 -- common/autotest_common.sh@650 -- # local es=0 00:23:02.070 16:42:39 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:02.070 16:42:39 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:02.070 16:42:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.070 16:42:39 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:02.070 16:42:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:02.070 16:42:39 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:02.070 16:42:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.070 16:42:39 -- common/autotest_common.sh@10 -- # set +x 00:23:02.070 [2024-11-16 16:42:39.526324] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:02.070 2024/11/16 16:42:39 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:02.070 request: 00:23:02.070 { 00:23:02.070 "method": "bdev_nvme_start_mdns_discovery", 00:23:02.070 "params": { 00:23:02.070 "name": "mdns", 00:23:02.070 "svcname": "_nvme-disc._http", 00:23:02.070 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:02.070 } 00:23:02.070 } 00:23:02.070 Got JSON-RPC error response 00:23:02.070 GoRPCClient: error on JSON-RPC call 00:23:02.070 16:42:39 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:02.071 16:42:39 -- 
common/autotest_common.sh@653 -- # es=1 00:23:02.071 16:42:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:02.071 16:42:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:02.071 16:42:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:02.071 16:42:39 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:02.637 [2024-11-16 16:42:39.914908] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:02.637 [2024-11-16 16:42:40.014905] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:02.637 [2024-11-16 16:42:40.114916] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:02.637 [2024-11-16 16:42:40.114938] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:02.637 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:02.637 cookie is 0 00:23:02.637 is_local: 1 00:23:02.637 our_own: 0 00:23:02.637 wide_area: 0 00:23:02.637 multicast: 1 00:23:02.637 cached: 1 00:23:02.895 [2024-11-16 16:42:40.214912] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:02.895 [2024-11-16 16:42:40.214933] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:02.895 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:02.895 cookie is 0 00:23:02.895 is_local: 1 00:23:02.895 our_own: 0 00:23:02.895 wide_area: 0 00:23:02.895 multicast: 1 00:23:02.895 cached: 1 00:23:03.830 [2024-11-16 16:42:41.118498] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:03.830 [2024-11-16 16:42:41.118520] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:03.830 [2024-11-16 16:42:41.118535] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:03.830 [2024-11-16 16:42:41.204585] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:03.830 [2024-11-16 16:42:41.218285] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:03.830 [2024-11-16 16:42:41.218303] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:03.830 [2024-11-16 16:42:41.218317] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.830 [2024-11-16 16:42:41.265134] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:03.830 [2024-11-16 16:42:41.265158] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:03.830 [2024-11-16 16:42:41.304262] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:04.088 [2024-11-16 16:42:41.362779] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:04.088 [2024-11-16 16:42:41.362804] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@80 -- # sort 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@80 -- # xargs 00:23:07.369 16:42:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.369 16:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.369 16:42:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:07.369 16:42:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.369 16:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@76 -- # xargs 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@76 -- # sort 00:23:07.369 16:42:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@64 -- # sort 00:23:07.369 16:42:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@64 -- # xargs 00:23:07.369 16:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.369 16:42:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:07.369 16:42:44 -- common/autotest_common.sh@650 -- # local es=0 00:23:07.369 16:42:44 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:07.369 16:42:44 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:07.369 16:42:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.369 16:42:44 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:07.369 16:42:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.369 16:42:44 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:07.369 16:42:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.369 16:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.369 [2024-11-16 16:42:44.716005] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:07.369 2024/11/16 16:42:44 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:07.369 request: 00:23:07.369 { 00:23:07.369 "method": "bdev_nvme_start_mdns_discovery", 00:23:07.369 "params": { 00:23:07.369 "name": "cdc", 00:23:07.369 "svcname": "_nvme-disc._tcp", 00:23:07.369 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:07.369 } 00:23:07.369 } 00:23:07.369 Got JSON-RPC error response 00:23:07.369 GoRPCClient: error on JSON-RPC call 00:23:07.369 16:42:44 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:07.369 16:42:44 -- common/autotest_common.sh@653 -- # es=1 00:23:07.369 16:42:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.369 16:42:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.369 16:42:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:07.369 16:42:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:07.369 16:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@76 -- # sort 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@76 -- # xargs 00:23:07.369 16:42:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@64 -- # sort 00:23:07.369 16:42:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@64 -- # xargs 00:23:07.369 16:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.369 16:42:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:07.369 16:42:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.369 16:42:44 -- common/autotest_common.sh@10 -- # set +x 00:23:07.369 16:42:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@197 -- # kill 98541 00:23:07.369 16:42:44 -- host/mdns_discovery.sh@200 -- # wait 98541 00:23:07.628 [2024-11-16 16:42:44.975479] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:07.628 16:42:45 -- host/mdns_discovery.sh@201 -- # kill 98628 00:23:07.628 Got SIGTERM, quitting. 00:23:07.628 16:42:45 -- host/mdns_discovery.sh@202 -- # kill 98577 00:23:07.628 16:42:45 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:07.628 16:42:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:07.628 16:42:45 -- nvmf/common.sh@116 -- # sync 00:23:07.628 Got SIGTERM, quitting. 
00:23:07.628 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:07.628 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:07.628 avahi-daemon 0.8 exiting. 00:23:07.887 16:42:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:07.887 16:42:45 -- nvmf/common.sh@119 -- # set +e 00:23:07.887 16:42:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:07.887 16:42:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:07.887 rmmod nvme_tcp 00:23:07.887 rmmod nvme_fabrics 00:23:07.887 rmmod nvme_keyring 00:23:07.887 16:42:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:07.887 16:42:45 -- nvmf/common.sh@123 -- # set -e 00:23:07.887 16:42:45 -- nvmf/common.sh@124 -- # return 0 00:23:07.887 16:42:45 -- nvmf/common.sh@477 -- # '[' -n 98510 ']' 00:23:07.887 16:42:45 -- nvmf/common.sh@478 -- # killprocess 98510 00:23:07.887 16:42:45 -- common/autotest_common.sh@936 -- # '[' -z 98510 ']' 00:23:07.887 16:42:45 -- common/autotest_common.sh@940 -- # kill -0 98510 00:23:07.887 16:42:45 -- common/autotest_common.sh@941 -- # uname 00:23:07.887 16:42:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:07.887 16:42:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98510 00:23:07.887 killing process with pid 98510 00:23:07.887 16:42:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:07.887 16:42:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:07.887 16:42:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98510' 00:23:07.887 16:42:45 -- common/autotest_common.sh@955 -- # kill 98510 00:23:07.887 16:42:45 -- common/autotest_common.sh@960 -- # wait 98510 00:23:08.146 16:42:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:08.146 16:42:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:08.146 16:42:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:08.146 16:42:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.146 16:42:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:08.146 16:42:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.146 16:42:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.146 16:42:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.146 16:42:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:08.146 00:23:08.146 real 0m19.816s 00:23:08.146 user 0m39.184s 00:23:08.146 sys 0m1.910s 00:23:08.146 16:42:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:08.146 ************************************ 00:23:08.146 END TEST nvmf_mdns_discovery 00:23:08.146 ************************************ 00:23:08.146 16:42:45 -- common/autotest_common.sh@10 -- # set +x 00:23:08.146 16:42:45 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:08.146 16:42:45 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:08.146 16:42:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:08.146 16:42:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:08.146 16:42:45 -- common/autotest_common.sh@10 -- # set +x 00:23:08.146 ************************************ 00:23:08.146 START TEST nvmf_multipath 00:23:08.146 ************************************ 00:23:08.146 16:42:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:08.146 * Looking for 
test storage... 00:23:08.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:08.146 16:42:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:08.146 16:42:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:08.146 16:42:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:08.406 16:42:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:08.406 16:42:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:08.406 16:42:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:08.406 16:42:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:08.406 16:42:45 -- scripts/common.sh@335 -- # IFS=.-: 00:23:08.406 16:42:45 -- scripts/common.sh@335 -- # read -ra ver1 00:23:08.406 16:42:45 -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.406 16:42:45 -- scripts/common.sh@336 -- # read -ra ver2 00:23:08.406 16:42:45 -- scripts/common.sh@337 -- # local 'op=<' 00:23:08.406 16:42:45 -- scripts/common.sh@339 -- # ver1_l=2 00:23:08.406 16:42:45 -- scripts/common.sh@340 -- # ver2_l=1 00:23:08.406 16:42:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:08.406 16:42:45 -- scripts/common.sh@343 -- # case "$op" in 00:23:08.406 16:42:45 -- scripts/common.sh@344 -- # : 1 00:23:08.406 16:42:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:08.406 16:42:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:08.406 16:42:45 -- scripts/common.sh@364 -- # decimal 1 00:23:08.406 16:42:45 -- scripts/common.sh@352 -- # local d=1 00:23:08.406 16:42:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.406 16:42:45 -- scripts/common.sh@354 -- # echo 1 00:23:08.406 16:42:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:08.406 16:42:45 -- scripts/common.sh@365 -- # decimal 2 00:23:08.406 16:42:45 -- scripts/common.sh@352 -- # local d=2 00:23:08.406 16:42:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.406 16:42:45 -- scripts/common.sh@354 -- # echo 2 00:23:08.406 16:42:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:08.406 16:42:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:08.406 16:42:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:08.406 16:42:45 -- scripts/common.sh@367 -- # return 0 00:23:08.406 16:42:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.406 16:42:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:08.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.406 --rc genhtml_branch_coverage=1 00:23:08.406 --rc genhtml_function_coverage=1 00:23:08.406 --rc genhtml_legend=1 00:23:08.406 --rc geninfo_all_blocks=1 00:23:08.406 --rc geninfo_unexecuted_blocks=1 00:23:08.406 00:23:08.406 ' 00:23:08.406 16:42:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:08.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.406 --rc genhtml_branch_coverage=1 00:23:08.406 --rc genhtml_function_coverage=1 00:23:08.406 --rc genhtml_legend=1 00:23:08.406 --rc geninfo_all_blocks=1 00:23:08.406 --rc geninfo_unexecuted_blocks=1 00:23:08.406 00:23:08.406 ' 00:23:08.406 16:42:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:08.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.406 --rc genhtml_branch_coverage=1 00:23:08.406 --rc genhtml_function_coverage=1 00:23:08.406 --rc genhtml_legend=1 00:23:08.406 --rc geninfo_all_blocks=1 00:23:08.406 --rc geninfo_unexecuted_blocks=1 00:23:08.406 00:23:08.406 ' 
00:23:08.406 16:42:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:08.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.406 --rc genhtml_branch_coverage=1 00:23:08.406 --rc genhtml_function_coverage=1 00:23:08.406 --rc genhtml_legend=1 00:23:08.406 --rc geninfo_all_blocks=1 00:23:08.406 --rc geninfo_unexecuted_blocks=1 00:23:08.406 00:23:08.406 ' 00:23:08.406 16:42:45 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:08.406 16:42:45 -- nvmf/common.sh@7 -- # uname -s 00:23:08.406 16:42:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.406 16:42:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.406 16:42:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.406 16:42:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.406 16:42:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.406 16:42:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.406 16:42:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.406 16:42:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.406 16:42:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.406 16:42:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.406 16:42:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:23:08.406 16:42:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:23:08.406 16:42:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.406 16:42:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.406 16:42:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:08.406 16:42:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:08.406 16:42:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.406 16:42:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.406 16:42:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.406 16:42:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.406 16:42:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.406 16:42:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.406 16:42:45 -- paths/export.sh@5 -- # export PATH 00:23:08.406 16:42:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.406 16:42:45 -- nvmf/common.sh@46 -- # : 0 00:23:08.406 16:42:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:08.406 16:42:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:08.406 16:42:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:08.406 16:42:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.406 16:42:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.406 16:42:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:08.406 16:42:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:08.406 16:42:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:08.406 16:42:45 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:08.406 16:42:45 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:08.406 16:42:45 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.406 16:42:45 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:08.406 16:42:45 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.406 16:42:45 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:08.406 16:42:45 -- host/multipath.sh@30 -- # nvmftestinit 00:23:08.406 16:42:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:08.406 16:42:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.406 16:42:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:08.406 16:42:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:08.406 16:42:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:08.406 16:42:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.406 16:42:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.406 16:42:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.406 16:42:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:08.406 16:42:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:08.406 16:42:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:08.406 16:42:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:08.406 16:42:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:08.406 16:42:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:08.406 16:42:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.406 16:42:45 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.406 16:42:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:08.406 16:42:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:08.406 16:42:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:08.406 16:42:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:08.406 16:42:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:08.406 16:42:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.406 16:42:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:08.406 16:42:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:08.406 16:42:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:08.406 16:42:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:08.406 16:42:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:08.406 16:42:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:08.406 Cannot find device "nvmf_tgt_br" 00:23:08.406 16:42:45 -- nvmf/common.sh@154 -- # true 00:23:08.406 16:42:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:08.406 Cannot find device "nvmf_tgt_br2" 00:23:08.406 16:42:45 -- nvmf/common.sh@155 -- # true 00:23:08.407 16:42:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:08.407 16:42:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:08.407 Cannot find device "nvmf_tgt_br" 00:23:08.407 16:42:45 -- nvmf/common.sh@157 -- # true 00:23:08.407 16:42:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:08.407 Cannot find device "nvmf_tgt_br2" 00:23:08.407 16:42:45 -- nvmf/common.sh@158 -- # true 00:23:08.407 16:42:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:08.407 16:42:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:08.407 16:42:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:08.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.407 16:42:45 -- nvmf/common.sh@161 -- # true 00:23:08.407 16:42:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:08.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:08.407 16:42:45 -- nvmf/common.sh@162 -- # true 00:23:08.407 16:42:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:08.407 16:42:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:08.407 16:42:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:08.407 16:42:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:08.407 16:42:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:08.407 16:42:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:08.665 16:42:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:08.665 16:42:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:08.665 16:42:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:08.666 16:42:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:08.666 16:42:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:08.666 16:42:45 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:08.666 16:42:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:08.666 16:42:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:08.666 16:42:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:08.666 16:42:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:08.666 16:42:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:08.666 16:42:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:08.666 16:42:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:08.666 16:42:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:08.666 16:42:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:08.666 16:42:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:08.666 16:42:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:08.666 16:42:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:08.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:23:08.666 00:23:08.666 --- 10.0.0.2 ping statistics --- 00:23:08.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.666 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:08.666 16:42:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:08.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:08.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:23:08.666 00:23:08.666 --- 10.0.0.3 ping statistics --- 00:23:08.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.666 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:08.666 16:42:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:08.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:08.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:08.666 00:23:08.666 --- 10.0.0.1 ping statistics --- 00:23:08.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.666 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:08.666 16:42:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.666 16:42:46 -- nvmf/common.sh@421 -- # return 0 00:23:08.666 16:42:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:08.666 16:42:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.666 16:42:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:08.666 16:42:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:08.666 16:42:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.666 16:42:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:08.666 16:42:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:08.666 16:42:46 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:08.666 16:42:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:08.666 16:42:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.666 16:42:46 -- common/autotest_common.sh@10 -- # set +x 00:23:08.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
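The nvmf_veth_init sequence above is worth pulling out of the xtrace noise: it builds a two-path TCP fabric in which the initiator at 10.0.0.1 reaches the same target namespace over 10.0.0.2 and 10.0.0.3. A condensed, standalone sketch (run as root; interface names and addresses are exactly the ones in the log):

  # target lives in its own namespace; two veth pairs = two NVMe/TCP paths
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target path 1
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # target path 2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c \
      'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  # one bridge stitches the three host-side veth peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # both paths answer, as in the log
  modprobe nvme-tcp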
00:23:08.666 16:42:46 -- nvmf/common.sh@469 -- # nvmfpid=99148 00:23:08.666 16:42:46 -- nvmf/common.sh@470 -- # waitforlisten 99148 00:23:08.666 16:42:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:08.666 16:42:46 -- common/autotest_common.sh@829 -- # '[' -z 99148 ']' 00:23:08.666 16:42:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.666 16:42:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.666 16:42:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.666 16:42:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.666 16:42:46 -- common/autotest_common.sh@10 -- # set +x 00:23:08.666 [2024-11-16 16:42:46.127381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:08.666 [2024-11-16 16:42:46.127623] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.924 [2024-11-16 16:42:46.263426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:08.924 [2024-11-16 16:42:46.338215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:08.924 [2024-11-16 16:42:46.338522] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.924 [2024-11-16 16:42:46.338543] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.924 [2024-11-16 16:42:46.338552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
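What nvmfappstart did above, stripped to its essentials: launch nvmf_tgt inside the namespace, then block until its RPC socket answers. A minimal sketch under the same paths; the rpc_get_methods poll is an assumed stand-in for the harness's waitforlisten helper:

  SPDK=/home/vagrant/spdk_repo/spdk
  # -i 0: shm id 0, -e 0xFFFF: all tracepoint groups, -m 0x3: reactors on cores 0-1
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the target is ready to serve RPCs
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done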
00:23:08.924 [2024-11-16 16:42:46.338713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.924 [2024-11-16 16:42:46.338726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.858 16:42:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.858 16:42:47 -- common/autotest_common.sh@862 -- # return 0 00:23:09.858 16:42:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:09.858 16:42:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.858 16:42:47 -- common/autotest_common.sh@10 -- # set +x 00:23:09.858 16:42:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.858 16:42:47 -- host/multipath.sh@33 -- # nvmfapp_pid=99148 00:23:09.858 16:42:47 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:10.116 [2024-11-16 16:42:47.386553] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.116 16:42:47 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:10.374 Malloc0 00:23:10.374 16:42:47 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:10.632 16:42:48 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:10.889 16:42:48 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.146 [2024-11-16 16:42:48.491272] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.146 16:42:48 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:11.405 [2024-11-16 16:42:48.699406] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:11.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.405 16:42:48 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:11.405 16:42:48 -- host/multipath.sh@44 -- # bdevperf_pid=99256 00:23:11.405 16:42:48 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.405 16:42:48 -- host/multipath.sh@47 -- # waitforlisten 99256 /var/tmp/bdevperf.sock 00:23:11.405 16:42:48 -- common/autotest_common.sh@829 -- # '[' -z 99256 ']' 00:23:11.405 16:42:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.405 16:42:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.405 16:42:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
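The RPC calls above provision the whole target side; collected in order, with arguments exactly as logged (-r on nvmf_create_subsystem turns on the ANA reporting this test exercises):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8 KiB I/O unit
  $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Both listeners share one address and differ only in port, so every ANA flip below moves I/O between ports 4420 and 4421 rather than between IPs. The bdevperf side then attaches the same subsystem twice, once per port, the second time with -x multipath, yielding a single Nvme0n1 bdev with two paths, as the next lines show.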
00:23:11.405 16:42:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.405 16:42:48 -- common/autotest_common.sh@10 -- # set +x 00:23:12.337 16:42:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.337 16:42:49 -- common/autotest_common.sh@862 -- # return 0 00:23:12.337 16:42:49 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:12.596 16:42:49 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:12.854 Nvme0n1 00:23:12.854 16:42:50 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:13.421 Nvme0n1 00:23:13.421 16:42:50 -- host/multipath.sh@78 -- # sleep 1 00:23:13.421 16:42:50 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:14.356 16:42:51 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:14.356 16:42:51 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:14.614 16:42:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:14.872 16:42:52 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:14.872 16:42:52 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99148 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:14.872 16:42:52 -- host/multipath.sh@65 -- # dtrace_pid=99339 00:23:14.872 16:42:52 -- host/multipath.sh@66 -- # sleep 6 00:23:21.435 16:42:58 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:21.435 16:42:58 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:21.435 16:42:58 -- host/multipath.sh@67 -- # active_port=4421 00:23:21.435 16:42:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:21.435 Attaching 4 probes... 
00:23:21.435 @path[10.0.0.2, 4421]: 20425 00:23:21.435 @path[10.0.0.2, 4421]: 21196 00:23:21.435 @path[10.0.0.2, 4421]: 20912 00:23:21.435 @path[10.0.0.2, 4421]: 20941 00:23:21.435 @path[10.0.0.2, 4421]: 21141 00:23:21.435 16:42:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:21.435 16:42:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:21.435 16:42:58 -- host/multipath.sh@69 -- # sed -n 1p 00:23:21.435 16:42:58 -- host/multipath.sh@69 -- # port=4421 00:23:21.435 16:42:58 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:21.435 16:42:58 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:21.435 16:42:58 -- host/multipath.sh@72 -- # kill 99339 00:23:21.435 16:42:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:21.435 16:42:58 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:21.435 16:42:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:21.435 16:42:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:21.435 16:42:58 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:21.435 16:42:58 -- host/multipath.sh@65 -- # dtrace_pid=99477 00:23:21.435 16:42:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99148 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:21.435 16:42:58 -- host/multipath.sh@66 -- # sleep 6 00:23:28.046 16:43:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:28.046 16:43:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:28.046 16:43:05 -- host/multipath.sh@67 -- # active_port=4420 00:23:28.046 16:43:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:28.046 Attaching 4 probes... 
00:23:28.046 @path[10.0.0.2, 4420]: 23311 00:23:28.046 @path[10.0.0.2, 4420]: 23849 00:23:28.046 @path[10.0.0.2, 4420]: 23830 00:23:28.046 @path[10.0.0.2, 4420]: 23805 00:23:28.046 @path[10.0.0.2, 4420]: 23796 00:23:28.046 16:43:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:28.046 16:43:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:28.046 16:43:05 -- host/multipath.sh@69 -- # sed -n 1p 00:23:28.046 16:43:05 -- host/multipath.sh@69 -- # port=4420 00:23:28.046 16:43:05 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:28.046 16:43:05 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:28.046 16:43:05 -- host/multipath.sh@72 -- # kill 99477 00:23:28.046 16:43:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:28.046 16:43:05 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:28.046 16:43:05 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:28.046 16:43:05 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:28.304 16:43:05 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:28.304 16:43:05 -- host/multipath.sh@65 -- # dtrace_pid=99612 00:23:28.304 16:43:05 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99148 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:28.304 16:43:05 -- host/multipath.sh@66 -- # sleep 6 00:23:34.871 16:43:11 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:34.871 16:43:11 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:34.871 16:43:11 -- host/multipath.sh@67 -- # active_port=4421 00:23:34.871 16:43:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.871 Attaching 4 probes... 
00:23:34.871 @path[10.0.0.2, 4421]: 15369 00:23:34.871 @path[10.0.0.2, 4421]: 21800 00:23:34.871 @path[10.0.0.2, 4421]: 21755 00:23:34.871 @path[10.0.0.2, 4421]: 21668 00:23:34.871 @path[10.0.0.2, 4421]: 21739 00:23:34.871 16:43:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:34.871 16:43:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:34.871 16:43:11 -- host/multipath.sh@69 -- # sed -n 1p 00:23:34.871 16:43:11 -- host/multipath.sh@69 -- # port=4421 00:23:34.871 16:43:11 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:34.871 16:43:11 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:34.871 16:43:11 -- host/multipath.sh@72 -- # kill 99612 00:23:34.871 16:43:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:34.871 16:43:11 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:34.871 16:43:11 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:34.871 16:43:12 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:35.130 16:43:12 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:35.130 16:43:12 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99148 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:35.130 16:43:12 -- host/multipath.sh@65 -- # dtrace_pid=99738 00:23:35.130 16:43:12 -- host/multipath.sh@66 -- # sleep 6 00:23:41.697 16:43:18 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:41.697 16:43:18 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:41.697 16:43:18 -- host/multipath.sh@67 -- # active_port= 00:23:41.697 16:43:18 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:41.697 Attaching 4 probes... 
00:23:41.697 00:23:41.697 00:23:41.697 00:23:41.697 00:23:41.697 00:23:41.697 16:43:18 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:41.697 16:43:18 -- host/multipath.sh@69 -- # sed -n 1p 00:23:41.697 16:43:18 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:41.697 16:43:18 -- host/multipath.sh@69 -- # port= 00:23:41.697 16:43:18 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:41.697 16:43:18 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:41.697 16:43:18 -- host/multipath.sh@72 -- # kill 99738 00:23:41.697 16:43:18 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:41.697 16:43:18 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:41.697 16:43:18 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:41.697 16:43:19 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:41.956 16:43:19 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:41.956 16:43:19 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99148 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:41.956 16:43:19 -- host/multipath.sh@65 -- # dtrace_pid=99874 00:23:41.956 16:43:19 -- host/multipath.sh@66 -- # sleep 6 00:23:48.526 16:43:25 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:48.526 16:43:25 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:48.526 16:43:25 -- host/multipath.sh@67 -- # active_port=4421 00:23:48.526 16:43:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:48.526 Attaching 4 probes... 
00:23:48.526 @path[10.0.0.2, 4421]: 20922 00:23:48.526 @path[10.0.0.2, 4421]: 21556 00:23:48.526 @path[10.0.0.2, 4421]: 21502 00:23:48.526 @path[10.0.0.2, 4421]: 21540 00:23:48.526 @path[10.0.0.2, 4421]: 21533 00:23:48.526 16:43:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:48.526 16:43:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:48.526 16:43:25 -- host/multipath.sh@69 -- # sed -n 1p 00:23:48.526 16:43:25 -- host/multipath.sh@69 -- # port=4421 00:23:48.526 16:43:25 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:48.526 16:43:25 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:48.526 16:43:25 -- host/multipath.sh@72 -- # kill 99874 00:23:48.526 16:43:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:48.526 16:43:25 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
00:23:48.526 [2024-11-16 16:43:25.727797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda9370 is same with the state(5) to be set 
00:23:48.527 [last message repeated verbatim, timestamps 16:43:25.727864 through 16:43:25.728608, for the remaining ~80 recv-state transitions on tqpair=0xda9370 while the 4421 listener is torn down] 
00:23:48.527 16:43:25 -- host/multipath.sh@101 -- # sleep 1 00:23:49.464 16:43:26 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:49.464 16:43:26 -- host/multipath.sh@65 -- # dtrace_pid=100004 00:23:49.464 16:43:26 -- host/multipath.sh@66 -- # sleep 6 00:23:49.464 16:43:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99148 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:56.031
16:43:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:56.031 16:43:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:56.031 16:43:33 -- host/multipath.sh@67 -- # active_port=4420 00:23:56.031 16:43:33 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:56.031 Attaching 4 probes... 00:23:56.031 @path[10.0.0.2, 4420]: 22784 00:23:56.031 @path[10.0.0.2, 4420]: 22994 00:23:56.031 @path[10.0.0.2, 4420]: 22793 00:23:56.031 @path[10.0.0.2, 4420]: 23012 00:23:56.031 @path[10.0.0.2, 4420]: 23083 00:23:56.031 16:43:33 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:56.031 16:43:33 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:56.031 16:43:33 -- host/multipath.sh@69 -- # sed -n 1p 00:23:56.031 16:43:33 -- host/multipath.sh@69 -- # port=4420 00:23:56.031 16:43:33 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:56.031 16:43:33 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:56.031 16:43:33 -- host/multipath.sh@72 -- # kill 100004 00:23:56.031 16:43:33 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:56.031 16:43:33 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:56.031 [2024-11-16 16:43:33.310558] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:56.031 16:43:33 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:56.289 16:43:33 -- host/multipath.sh@111 -- # sleep 6 00:24:02.855 16:43:39 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:02.855 16:43:39 -- host/multipath.sh@65 -- # dtrace_pid=100202 00:24:02.855 16:43:39 -- host/multipath.sh@66 -- # sleep 6 00:24:02.855 16:43:39 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99148 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:08.125 16:43:45 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:08.125 16:43:45 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:08.384 16:43:45 -- host/multipath.sh@67 -- # active_port=4421 00:24:08.384 16:43:45 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.384 Attaching 4 probes... 
00:24:08.384 @path[10.0.0.2, 4421]: 20929 00:24:08.384 @path[10.0.0.2, 4421]: 21322 00:24:08.384 @path[10.0.0.2, 4421]: 21306 00:24:08.384 @path[10.0.0.2, 4421]: 21320 00:24:08.384 @path[10.0.0.2, 4421]: 21323 00:24:08.384 16:43:45 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:08.384 16:43:45 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:08.384 16:43:45 -- host/multipath.sh@69 -- # sed -n 1p 00:24:08.384 16:43:45 -- host/multipath.sh@69 -- # port=4421 00:24:08.384 16:43:45 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:08.384 16:43:45 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:08.384 16:43:45 -- host/multipath.sh@72 -- # kill 100202 00:24:08.384 16:43:45 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:08.384 16:43:45 -- host/multipath.sh@114 -- # killprocess 99256 00:24:08.384 16:43:45 -- common/autotest_common.sh@936 -- # '[' -z 99256 ']' 00:24:08.384 16:43:45 -- common/autotest_common.sh@940 -- # kill -0 99256 00:24:08.384 16:43:45 -- common/autotest_common.sh@941 -- # uname 00:24:08.384 16:43:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:08.384 16:43:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99256 00:24:08.384 16:43:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:08.384 16:43:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:08.384 killing process with pid 99256 00:24:08.384 16:43:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99256' 00:24:08.384 16:43:45 -- common/autotest_common.sh@955 -- # kill 99256 00:24:08.384 16:43:45 -- common/autotest_common.sh@960 -- # wait 99256 00:24:08.652 Connection closed with partial response: 00:24:08.652 00:24:08.652 00:24:08.652 16:43:46 -- host/multipath.sh@116 -- # wait 99256 00:24:08.652 16:43:46 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:08.652 [2024-11-16 16:42:48.755416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:08.652 [2024-11-16 16:42:48.755499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99256 ] 00:24:08.652 [2024-11-16 16:42:48.890657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.652 [2024-11-16 16:42:48.957531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.652 Running I/O for 90 seconds... 
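The try.txt dump that follows is the listener-removal case from @100 above: with the 4421 listener deleted, every in-flight command on that qpair completes with the ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02), and the multipath policy absorbs this by failing I/O over to 4420. Each confirm_io_on_port cycle in the run follows the same recipe; a sketch of one cycle (commands taken from the log, trace.txt path shortened, redirection assumed):

  # flip the ANA states, then check where the I/O actually lands
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  scripts/bpftrace.sh "$nvmfpid" scripts/bpf/nvmf_path.bt > trace.txt &   # counts I/Os per @path[addr, port]
  dtrace_pid=$!
  sleep 6
  # the subsystem must report the expected listener as optimized...
  $rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid'
  # ...and the bpftrace counters must show I/O flowing on that same port
  awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p
  kill "$dtrace_pid"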
00:24:08.652 [2024-11-16 16:42:58.904648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.652 [2024-11-16 16:42:58.904701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:24:08.652 [the dump continues in the same pattern: each READ/WRITE queued on qid:1 (sqhd:0025 onward, lba 128784-129648) is printed with its completion ASYMMETRIC ACCESS INACCESSIBLE (03/02); ~75 further command/completion pairs omitted] 
00:24:08.654 [2024-11-16 16:42:58.908634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1
lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.908648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.908667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.908682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.654 [2024-11-16 16:42:58.909628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.909671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.909708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.909742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.909791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.909826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.654 [2024-11-16 16:42:58.909860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.909895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.654 [2024-11-16 16:42:58.909928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.909961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.909981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.909995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.910014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.910028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.910047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.910071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.910122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.910141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.910161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.654 [2024-11-16 16:42:58.910175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.910196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.654 [2024-11-16 16:42:58.910210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.910240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.654 [2024-11-16 16:42:58.910255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.910275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.654 [2024-11-16 16:42:58.910290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 
m:0 dnr:0 00:24:08.654 [2024-11-16 16:42:58.910310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.910325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.910359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.910393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.910639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.910681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.910749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.910816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.910884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.910951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.910976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 
16:42:58.910991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.911025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.911073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.911116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.911151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.911185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.911219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.911252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:42:58.911285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.911319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:42:58.911339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:130048 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:42:58.911353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.655 [2024-11-16 16:43:05.391592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:43:05.391662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:43:05.391697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:43:05.391729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:43:05.391784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:43:05.391818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:43:05.391849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:43:05.391880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.655 [2024-11-16 16:43:05.391912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:08.655 [2024-11-16 16:43:05.391930] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.391942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.391960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.391974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.391992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 
16:43:05.392291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.656 [2024-11-16 16:43:05.392305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.656 [2024-11-16 16:43:05.392372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.656 [2024-11-16 16:43:05.392438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.656 [2024-11-16 16:43:05.392505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.656 [2024-11-16 16:43:05.392571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392946] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.656 [2024-11-16 16:43:05.392978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.392997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.656 [2024-11-16 16:43:05.393010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:08.656 [2024-11-16 16:43:05.393028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.656 [2024-11-16 16:43:05.393048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.393112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.393534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.393579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.393646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.393682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.393718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 
16:43:05.393753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.393787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.393821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.393855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.393890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.393935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.393971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.393991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.394039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32848 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.394152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.394231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.394269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:78 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.394656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.394733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.657 [2024-11-16 16:43:05.394766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.657 [2024-11-16 16:43:05.394870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:08.657 [2024-11-16 16:43:05.394897] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.657 [2024-11-16 16:43:05.394912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:08.657 [2024-11-16 16:43:05.394933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.657 [2024-11-16 16:43:05.394946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.394967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.394981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:33024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.658 [2024-11-16 16:43:05.395955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.395977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.395991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:08.658 [2024-11-16 16:43:05.396635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.658 [2024-11-16 16:43:05.396649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:05.396676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:33200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:05.396690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:05.396717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:05.396731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:05.396758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.659 [2024-11-16 16:43:05.396773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:05.396805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.659 [2024-11-16 16:43:05.396820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:05.396847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:05.396862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:05.396888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.659 [2024-11-16 16:43:05.396903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:05.396930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.659 [2024-11-16 16:43:05.396944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:05.396972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.659 [2024-11-16 16:43:05.396987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.659 [2024-11-16 16:43:12.426103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.659 [2024-11-16 16:43:12.426248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.659 [2024-11-16 16:43:12.426578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.659 [2024-11-16 16:43:12.426654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.426963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.426977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.427007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.427033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.427061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.427120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.427143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.427158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.427178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.427193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.427213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.427227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.427246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.659 [2024-11-16 16:43:12.427261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:08.659 [2024-11-16 16:43:12.427281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.427296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.427315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.427330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.427349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.427363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.427383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.427397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.427416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.427445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.427479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.427492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.427510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.427524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.427542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.427565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.427584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.427598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.428044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.428135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.428178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.428216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.428329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.428368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.428972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.428986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.429008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.429022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.429043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.429057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.429094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.660 [2024-11-16 16:43:12.429124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.429148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.429163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.429226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.660 [2024-11-16 16:43:12.429244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:08.660 [2024-11-16 16:43:12.429269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.429323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.429553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.429590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.429677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.429712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.429963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.429991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.430438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.430688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.430728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.661 [2024-11-16 16:43:12.430767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:08.661 [2024-11-16 16:43:12.430792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.661 [2024-11-16 16:43:12.430806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.430831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:12.430845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.430870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:12.430884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.430909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:12.430923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.430948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:12.430962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.430987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:12.431001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.431026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:12.431040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.431077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:12.431094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.431120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:12.431134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.431167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:12.431183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:12.431208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:12.431222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:25.729930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.729956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:25.729982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.729996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.730009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:25.730035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.730083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:25.730143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.730191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.730221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.662 [2024-11-16 16:43:25.730262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:25.730292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:25.730320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:25.730349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:25.730378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.662 [2024-11-16 16:43:25.730422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.662 [2024-11-16 16:43:25.730437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.663 [2024-11-16 16:43:25.730450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.663 [2024-11-16 16:43:25.730480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.663 [2024-11-16 16:43:25.730493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.663 [2024-11-16 16:43:25.730507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.663 [2024-11-16 16:43:25.730520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.663 [2024-11-16 16:43:25.730534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.663 [2024-11-16 16:43:25.730548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.663 [2024-11-16 16:43:25.730563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.663 [2024-11-16 16:43:25.730576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.663 [2024-11-16 16:43:25.730590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:08.663 [2024-11-16 16:43:25.730604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.663 [2024-11-16 16:43:25.730618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.663 [2024-11-16 16:43:25.730637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.663 [2024-11-16 16:43:25.730653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.663 [2024-11-16 16:43:25.730666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.663 [2024-11-16 16:43:25.730694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.730734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.730762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.730789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.663 [2024-11-16 16:43:25.730816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.663 [2024-11-16 16:43:25.730843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.730870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.730896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.663 [2024-11-16 16:43:25.730925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.730952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 
[2024-11-16 16:43:25.730966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.730979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.730993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.663 [2024-11-16 16:43:25.731276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731320] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.663 [2024-11-16 16:43:25.731363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.663 [2024-11-16 16:43:25.731510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.663 [2024-11-16 16:43:25.731645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.663 [2024-11-16 16:43:25.731671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.663 [2024-11-16 16:43:25.731698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.663 [2024-11-16 16:43:25.731712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11640 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.731981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.731995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 
16:43:25.732633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.664 [2024-11-16 16:43:25.732879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.664 [2024-11-16 16:43:25.732923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.664 [2024-11-16 16:43:25.732936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.732950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.732966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.732980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.732993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.665 [2024-11-16 16:43:25.733025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.733052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.733081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.665 [2024-11-16 16:43:25.733129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.733156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.665 [2024-11-16 16:43:25.733208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.733237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.665 [2024-11-16 16:43:25.733265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.665 [2024-11-16 16:43:25.733293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.665 [2024-11-16 16:43:25.733321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.665 [2024-11-16 16:43:25.733349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.733377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:08.665 [2024-11-16 16:43:25.733409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.733447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.733480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.665 [2024-11-16 16:43:25.733523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:08.665 [2024-11-16 16:43:25.733577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:08.665 [2024-11-16 16:43:25.733588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11856 len:8 PRP1 0x0 PRP2 0x0 00:24:08.665 [2024-11-16 16:43:25.733615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733673] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bac060 was disconnected and freed. reset controller. 00:24:08.665 [2024-11-16 16:43:25.733775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.665 [2024-11-16 16:43:25.733799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.665 [2024-11-16 16:43:25.733827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.665 [2024-11-16 16:43:25.733852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.665 [2024-11-16 16:43:25.733877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.665 [2024-11-16 16:43:25.733890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbda00 is same with the state(5) to be set 00:24:08.665 [2024-11-16 16:43:25.735049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.665 [2024-11-16 16:43:25.735113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbda00 (9): Bad file descriptor 00:24:08.665 [2024-11-16 16:43:25.735229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.665 [2024-11-16 16:43:25.735285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.665 [2024-11-16 16:43:25.735307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbda00 with addr=10.0.0.2, port=4421 00:24:08.665 [2024-11-16 16:43:25.735322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbda00 is same with the state(5) to be set 00:24:08.665 [2024-11-16 16:43:25.735557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbda00 (9): Bad file descriptor 00:24:08.665 [2024-11-16 16:43:25.735664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.665 [2024-11-16 16:43:25.735689] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.665 [2024-11-16 16:43:25.735703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
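What this episode records: the multipath test drops the active TCP path, the host aborts everything still queued on the I/O submission queue, and bdev_nvme resets the controller and retries the connection on the alternate listener (port 4421 above). A minimal sketch of how such a subsystem ends up reachable on two TCP ports follows; it is assembled from the rpc.py calls visible elsewhere in this log, not from the verbatim contents of multipath.sh, so treat the exact flag values as assumptions. 

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
  # TCP transport with an 8192-byte in-capsule data size, as the timeout test below also uses 
  $rpc nvmf_create_transport -t tcp -o -u 8192 
  # a 64 MiB malloc bdev with 512-byte blocks backs the namespace 
  $rpc bdev_malloc_create 64 512 -b Malloc0 
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
  # two listeners on the same address give the initiator two paths to fail over between 
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 

With both listeners up, killing the 4420 path produces exactly the sequence logged here: queued I/O aborted with SQ DELETION, the qpair freed, and reconnect attempts against 4421 until the target side accepts. 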
00:24:08.665 [2024-11-16 16:43:25.735746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.665 [2024-11-16 16:43:25.735767] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.665 [2024-11-16 16:43:35.795686] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:08.665 Received shutdown signal, test time was about 55.054039 seconds 00:24:08.665 00:24:08.665 Latency(us) 00:24:08.665 [2024-11-16T16:43:46.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.665 [2024-11-16T16:43:46.156Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:08.665 Verification LBA range: start 0x0 length 0x4000 00:24:08.665 Nvme0n1 : 55.05 12562.73 49.07 0.00 0.00 10173.21 1176.67 7015926.69 00:24:08.665 [2024-11-16T16:43:46.156Z] =================================================================================================================== 00:24:08.665 [2024-11-16T16:43:46.156Z] Total : 12562.73 49.07 0.00 0.00 10173.21 1176.67 7015926.69 00:24:08.665 16:43:46 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.924 16:43:46 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:08.924 16:43:46 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:08.924 16:43:46 -- host/multipath.sh@125 -- # nvmftestfini 00:24:08.924 16:43:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:08.924 16:43:46 -- nvmf/common.sh@116 -- # sync 00:24:08.924 16:43:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:08.924 16:43:46 -- nvmf/common.sh@119 -- # set +e 00:24:08.924 16:43:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:08.924 16:43:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:08.924 rmmod nvme_tcp 00:24:08.924 rmmod nvme_fabrics 00:24:08.924 rmmod nvme_keyring 00:24:08.924 16:43:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:08.924 16:43:46 -- nvmf/common.sh@123 -- # set -e 00:24:08.924 16:43:46 -- nvmf/common.sh@124 -- # return 0 00:24:08.924 16:43:46 -- nvmf/common.sh@477 -- # '[' -n 99148 ']' 00:24:08.924 16:43:46 -- nvmf/common.sh@478 -- # killprocess 99148 00:24:08.924 16:43:46 -- common/autotest_common.sh@936 -- # '[' -z 99148 ']' 00:24:08.924 16:43:46 -- common/autotest_common.sh@940 -- # kill -0 99148 00:24:08.924 16:43:46 -- common/autotest_common.sh@941 -- # uname 00:24:08.924 16:43:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:08.924 16:43:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99148 00:24:08.924 16:43:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:08.924 16:43:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:08.924 killing process with pid 99148 00:24:08.924 16:43:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99148' 00:24:08.924 16:43:46 -- common/autotest_common.sh@955 -- # kill 99148 00:24:08.924 16:43:46 -- common/autotest_common.sh@960 -- # wait 99148 00:24:09.183 16:43:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:09.183 16:43:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:09.183 16:43:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:09.183 16:43:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:09.183 16:43:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 
00:24:09.183 16:43:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.183 16:43:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.183 16:43:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.441 16:43:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:09.441 00:24:09.441 real 1m1.206s 00:24:09.441 user 2m49.198s 00:24:09.441 sys 0m15.750s 00:24:09.441 16:43:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:09.441 16:43:46 -- common/autotest_common.sh@10 -- # set +x 00:24:09.441 ************************************ 00:24:09.441 END TEST nvmf_multipath 00:24:09.441 ************************************ 00:24:09.441 16:43:46 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:09.441 16:43:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:09.441 16:43:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:09.441 16:43:46 -- common/autotest_common.sh@10 -- # set +x 00:24:09.441 ************************************ 00:24:09.441 START TEST nvmf_timeout 00:24:09.441 ************************************ 00:24:09.441 16:43:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:09.441 * Looking for test storage... 00:24:09.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:09.442 16:43:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:09.442 16:43:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:09.442 16:43:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:09.442 16:43:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:09.442 16:43:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:09.442 16:43:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:09.442 16:43:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:09.442 16:43:46 -- scripts/common.sh@335 -- # IFS=.-: 00:24:09.442 16:43:46 -- scripts/common.sh@335 -- # read -ra ver1 00:24:09.442 16:43:46 -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.442 16:43:46 -- scripts/common.sh@336 -- # read -ra ver2 00:24:09.442 16:43:46 -- scripts/common.sh@337 -- # local 'op=<' 00:24:09.442 16:43:46 -- scripts/common.sh@339 -- # ver1_l=2 00:24:09.442 16:43:46 -- scripts/common.sh@340 -- # ver2_l=1 00:24:09.442 16:43:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:09.442 16:43:46 -- scripts/common.sh@343 -- # case "$op" in 00:24:09.442 16:43:46 -- scripts/common.sh@344 -- # : 1 00:24:09.442 16:43:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:09.442 16:43:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.442 16:43:46 -- scripts/common.sh@364 -- # decimal 1 00:24:09.442 16:43:46 -- scripts/common.sh@352 -- # local d=1 00:24:09.442 16:43:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.442 16:43:46 -- scripts/common.sh@354 -- # echo 1 00:24:09.442 16:43:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:09.442 16:43:46 -- scripts/common.sh@365 -- # decimal 2 00:24:09.442 16:43:46 -- scripts/common.sh@352 -- # local d=2 00:24:09.442 16:43:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.442 16:43:46 -- scripts/common.sh@354 -- # echo 2 00:24:09.442 16:43:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:09.442 16:43:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:09.442 16:43:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:09.442 16:43:46 -- scripts/common.sh@367 -- # return 0 00:24:09.442 16:43:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.442 16:43:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:09.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.442 --rc genhtml_branch_coverage=1 00:24:09.442 --rc genhtml_function_coverage=1 00:24:09.442 --rc genhtml_legend=1 00:24:09.442 --rc geninfo_all_blocks=1 00:24:09.442 --rc geninfo_unexecuted_blocks=1 00:24:09.442 00:24:09.442 ' 00:24:09.442 16:43:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:09.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.442 --rc genhtml_branch_coverage=1 00:24:09.442 --rc genhtml_function_coverage=1 00:24:09.442 --rc genhtml_legend=1 00:24:09.442 --rc geninfo_all_blocks=1 00:24:09.442 --rc geninfo_unexecuted_blocks=1 00:24:09.442 00:24:09.442 ' 00:24:09.442 16:43:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:09.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.442 --rc genhtml_branch_coverage=1 00:24:09.442 --rc genhtml_function_coverage=1 00:24:09.442 --rc genhtml_legend=1 00:24:09.442 --rc geninfo_all_blocks=1 00:24:09.442 --rc geninfo_unexecuted_blocks=1 00:24:09.442 00:24:09.442 ' 00:24:09.442 16:43:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:09.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.442 --rc genhtml_branch_coverage=1 00:24:09.442 --rc genhtml_function_coverage=1 00:24:09.442 --rc genhtml_legend=1 00:24:09.442 --rc geninfo_all_blocks=1 00:24:09.442 --rc geninfo_unexecuted_blocks=1 00:24:09.442 00:24:09.442 ' 00:24:09.442 16:43:46 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:09.442 16:43:46 -- nvmf/common.sh@7 -- # uname -s 00:24:09.442 16:43:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.442 16:43:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.442 16:43:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.442 16:43:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.442 16:43:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.442 16:43:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.442 16:43:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.442 16:43:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.442 16:43:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.442 16:43:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.700 16:43:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:24:09.700 
16:43:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:24:09.700 16:43:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.700 16:43:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.700 16:43:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:09.700 16:43:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:09.700 16:43:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.700 16:43:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.700 16:43:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.701 16:43:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.701 16:43:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.701 16:43:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.701 16:43:46 -- paths/export.sh@5 -- # export PATH 00:24:09.701 16:43:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.701 16:43:46 -- nvmf/common.sh@46 -- # : 0 00:24:09.701 16:43:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:09.701 16:43:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:09.701 16:43:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:09.701 16:43:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.701 16:43:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.701 16:43:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:09.701 16:43:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:09.701 16:43:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:09.701 16:43:46 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:09.701 16:43:46 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:09.701 16:43:46 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:09.701 16:43:46 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:09.701 16:43:46 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.701 16:43:46 -- host/timeout.sh@19 -- # nvmftestinit 00:24:09.701 16:43:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:09.701 16:43:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.701 16:43:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:09.701 16:43:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:09.701 16:43:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:09.701 16:43:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.701 16:43:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.701 16:43:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.701 16:43:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:09.701 16:43:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:09.701 16:43:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:09.701 16:43:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:09.701 16:43:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:09.701 16:43:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:09.701 16:43:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.701 16:43:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.701 16:43:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:09.701 16:43:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:09.701 16:43:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:09.701 16:43:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:09.701 16:43:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:09.701 16:43:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.701 16:43:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:09.701 16:43:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:09.701 16:43:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:09.701 16:43:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:09.701 16:43:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:09.701 16:43:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:09.701 Cannot find device "nvmf_tgt_br" 00:24:09.701 16:43:46 -- nvmf/common.sh@154 -- # true 00:24:09.701 16:43:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:09.701 Cannot find device "nvmf_tgt_br2" 00:24:09.701 16:43:46 -- nvmf/common.sh@155 -- # true 00:24:09.701 16:43:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:09.701 16:43:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:09.701 Cannot find device "nvmf_tgt_br" 00:24:09.701 16:43:47 -- nvmf/common.sh@157 -- # true 00:24:09.701 16:43:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:09.701 Cannot find device "nvmf_tgt_br2" 00:24:09.701 16:43:47 -- nvmf/common.sh@158 -- # true 00:24:09.701 16:43:47 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:09.701 16:43:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:09.701 16:43:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:09.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:09.701 16:43:47 -- nvmf/common.sh@161 -- # true 00:24:09.701 16:43:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:09.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:09.701 16:43:47 -- nvmf/common.sh@162 -- # true 00:24:09.701 16:43:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:09.701 16:43:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:09.701 16:43:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:09.701 16:43:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:09.701 16:43:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:09.701 16:43:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:09.701 16:43:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:09.701 16:43:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:09.701 16:43:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:09.701 16:43:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:09.701 16:43:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:09.701 16:43:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:09.701 16:43:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:09.701 16:43:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:09.701 16:43:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:09.701 16:43:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:09.701 16:43:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:09.701 16:43:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:09.960 16:43:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:09.960 16:43:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:09.960 16:43:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:09.960 16:43:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:09.960 16:43:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:09.960 16:43:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:09.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:09.960 00:24:09.960 --- 10.0.0.2 ping statistics --- 00:24:09.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.960 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:09.960 16:43:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:09.960 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:09.960 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:24:09.960 00:24:09.960 --- 10.0.0.3 ping statistics --- 00:24:09.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.960 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:09.960 16:43:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:09.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:09.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:09.960 00:24:09.960 --- 10.0.0.1 ping statistics --- 00:24:09.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.960 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:09.960 16:43:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.960 16:43:47 -- nvmf/common.sh@421 -- # return 0 00:24:09.960 16:43:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:09.960 16:43:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.960 16:43:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:09.960 16:43:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:09.960 16:43:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.960 16:43:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:09.960 16:43:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:09.960 16:43:47 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:09.960 16:43:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:09.960 16:43:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:09.960 16:43:47 -- common/autotest_common.sh@10 -- # set +x 00:24:09.960 16:43:47 -- nvmf/common.sh@469 -- # nvmfpid=100524 00:24:09.960 16:43:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:09.960 16:43:47 -- nvmf/common.sh@470 -- # waitforlisten 100524 00:24:09.960 16:43:47 -- common/autotest_common.sh@829 -- # '[' -z 100524 ']' 00:24:09.960 16:43:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.960 16:43:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.960 16:43:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.960 16:43:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.960 16:43:47 -- common/autotest_common.sh@10 -- # set +x 00:24:09.960 [2024-11-16 16:43:47.338534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:09.960 [2024-11-16 16:43:47.338617] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.220 [2024-11-16 16:43:47.479862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:10.220 [2024-11-16 16:43:47.554704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:10.220 [2024-11-16 16:43:47.554867] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.220 [2024-11-16 16:43:47.554883] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:10.220 [2024-11-16 16:43:47.554894] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.220 [2024-11-16 16:43:47.555078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.220 [2024-11-16 16:43:47.555495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.788 16:43:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.788 16:43:48 -- common/autotest_common.sh@862 -- # return 0 00:24:10.788 16:43:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:10.788 16:43:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.788 16:43:48 -- common/autotest_common.sh@10 -- # set +x 00:24:11.047 16:43:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.047 16:43:48 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.047 16:43:48 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:11.047 [2024-11-16 16:43:48.476344] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.047 16:43:48 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:11.306 Malloc0 00:24:11.306 16:43:48 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:11.565 16:43:49 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.824 16:43:49 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:12.083 [2024-11-16 16:43:49.413391] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.083 16:43:49 -- host/timeout.sh@32 -- # bdevperf_pid=100621 00:24:12.083 16:43:49 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:12.083 16:43:49 -- host/timeout.sh@34 -- # waitforlisten 100621 /var/tmp/bdevperf.sock 00:24:12.083 16:43:49 -- common/autotest_common.sh@829 -- # '[' -z 100621 ']' 00:24:12.083 16:43:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.083 16:43:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.083 16:43:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.083 16:43:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.083 16:43:49 -- common/autotest_common.sh@10 -- # set +x 00:24:12.083 [2024-11-16 16:43:49.476238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:12.083 [2024-11-16 16:43:49.476302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100621 ] 00:24:12.342 [2024-11-16 16:43:49.603819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.342 [2024-11-16 16:43:49.662709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.280 16:43:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.280 16:43:50 -- common/autotest_common.sh@862 -- # return 0 00:24:13.280 16:43:50 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:13.280 16:43:50 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:13.545 NVMe0n1 00:24:13.545 16:43:50 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.545 16:43:50 -- host/timeout.sh@51 -- # rpc_pid=100663 00:24:13.545 16:43:50 -- host/timeout.sh@53 -- # sleep 1 00:24:13.841 Running I/O for 10 seconds... 00:24:14.814 16:43:51 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.814 [2024-11-16 16:43:52.177965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178095] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178118] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set 
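The tcp.c:1576 state-change messages above and below are the target-side effect of the fault injection: host/timeout.sh@55 removes the TCP listener while bdevperf (-q 128 -w verify, above) is mid-run, forcing each target qpair through its shutdown states. The injection reduces to the following minimal sketch; the commands and options are taken verbatim from this trace, and rpc_py is the path set at host/timeout.sh@14 (an illustrative reconstruction, not part of the test suite itself):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Host side: attach with a 5 s controller-loss timeout and a 2 s
    # reconnect delay (host/timeout.sh@46 above).
    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Target side: drop the listener while I/O is in flight
    # (host/timeout.sh@55); queued commands then complete as
    # "ABORTED - SQ DELETION", as the nvme_qpair.c output below shows.
    "$rpc_py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420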
00:24:14.814 [2024-11-16 16:43:52.178154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1696490 is same with the state(5) to be set (previous message repeated 20 more times, 16:43:52.178161 through 16:43:52.178298) 00:24:14.814 [2024-11-16 16:43:52.178305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1696490 is same with the state(5) to be set 00:24:14.814 [2024-11-16 16:43:52.178885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.814 [2024-11-16 16:43:52.178921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.178941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.178966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.178976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.178986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.178995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.179004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.179013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.179021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.179031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.179055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.179518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.179612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.179624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.179633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.179645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.179653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.179663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.179808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.179820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.179829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.179947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.179961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 
16:43:52.180731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.180954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.180965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.815 [2024-11-16 16:43:52.181391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.181496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.181508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.181518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.181527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.181538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.181561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.181571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.815 [2024-11-16 16:43:52.181580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.181589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.181599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.181742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.815 [2024-11-16 16:43:52.181876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.181891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.815 [2024-11-16 16:43:52.182010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.182025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.815 [2024-11-16 16:43:52.182034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.182326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.182340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.182352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.182470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.182499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.182508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.815 [2024-11-16 16:43:52.182518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.815 [2024-11-16 16:43:52.182527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.182982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10552 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.182990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.183249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.183262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.183337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.183350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.183360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.183369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.183379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.183388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.183398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.183531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.183658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.183669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.183796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.183810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.183932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.183950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 
[2024-11-16 16:43:52.184221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.816 [2024-11-16 16:43:52.184827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.816 [2024-11-16 16:43:52.184857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.816 [2024-11-16 16:43:52.184942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.184958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.184973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.184983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.184993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.185011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.185129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.185155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.185491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.185704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.185723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.185742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.185760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.185892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.185912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.185922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.186187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.186200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.186211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.186321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.186337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.186346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.186356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.186365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.186483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.186498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.186509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.186617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.186630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.186640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.186650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.186797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.186943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.187052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 
16:43:52.187083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.187093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.187355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.187388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.187407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.187426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.187446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.187567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.187586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:14.817 [2024-11-16 16:43:52.187808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.187841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.187860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.187878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.187888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.187896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.188027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.188045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.188140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.188156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.188166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.188175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.188185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.188194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.188204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.817 [2024-11-16 16:43:52.188212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.817 [2024-11-16 16:43:52.188356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.818 [2024-11-16 16:43:52.188482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.818 [2024-11-16 16:43:52.188497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.818 [2024-11-16 16:43:52.188507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.818 [2024-11-16 16:43:52.188763] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.818 [2024-11-16 16:43:52.188774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.818 [2024-11-16 16:43:52.188784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.818 [2024-11-16 16:43:52.188794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.818 [2024-11-16 16:43:52.188804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.818 [2024-11-16 16:43:52.188813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.818 [2024-11-16 16:43:52.188955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.818 [2024-11-16 16:43:52.189085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.818 [2024-11-16 16:43:52.189100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.818 [2024-11-16 16:43:52.189347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.818 [2024-11-16 16:43:52.189368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.818 [2024-11-16 16:43:52.189379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.818 [2024-11-16 16:43:52.189389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb69780 is same with the state(5) to be set 00:24:14.818 [2024-11-16 16:43:52.189401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:14.818 [2024-11-16 16:43:52.189408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:14.818 [2024-11-16 16:43:52.189416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10840 len:8 PRP1 0x0 PRP2 0x0 00:24:14.818 [2024-11-16 16:43:52.189424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:14.818 [2024-11-16 16:43:52.189712] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb69780 was disconnected and freed. reset controller. 
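With qpair 0xb69780 disconnected and freed, bdev_nvme resets the controller, and each reconnect attempt below dies in posix.c:1032 with connect() errno = 111, since nothing listens on 10.0.0.2:4420 any more. On Linux, errno 111 is ECONNREFUSED; a quick way to confirm the mapping (an illustrative one-liner, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused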
00:24:14.818 [2024-11-16 16:43:52.190012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:14.818 [2024-11-16 16:43:52.190038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / "ABORTED - SQ DELETION" pair repeats for admin commands cid:1 through cid:3 on qid:0 ...]
00:24:14.818 [2024-11-16 16:43:52.190114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae48c0 is same with the state(5) to be set
00:24:14.818 [2024-11-16 16:43:52.190492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:14.818 [2024-11-16 16:43:52.190524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae48c0 (9): Bad file descriptor
00:24:14.818 [2024-11-16 16:43:52.190830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.818 [2024-11-16 16:43:52.190890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:14.818 [2024-11-16 16:43:52.190906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae48c0 with addr=10.0.0.2, port=4420
00:24:14.818 [2024-11-16 16:43:52.191029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae48c0 is same with the state(5) to be set
00:24:14.818 [2024-11-16 16:43:52.191144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae48c0 (9): Bad file descriptor
00:24:14.818 [2024-11-16 16:43:52.191170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:14.818 [2024-11-16 16:43:52.191180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:14.818 [2024-11-16 16:43:52.191189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:14.818 [2024-11-16 16:43:52.191450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
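errno = 111 here is ECONNREFUSED: the listener this initiator was connected to is gone, so every reconnect attempt dies in connect() and spdk_nvme_ctrlr_reconnect_poll_async marks the cycle failed. The retry cadence and the eventual hard failure follow the flags this script attaches controllers with (visible in the bdev_nvme_attach_controller call traced further down: reconnect every 1 s, fast-fail I/O after 2 s, declare the controller lost after 5 s, which is why it is "already in failed state" by 16:43:58). During such an outage the refusal can be confirmed with a plain TCP probe, e.g. (a bash /dev/tcp sketch, not part of the test):

# Probe the NVMe/TCP listener address used in this log; while the listener is
# removed, this fails the same way connect() does in the messages above.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listener is accepting connections"
else
    echo "connect failed (listener removed or unreachable)"
fi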
00:24:14.818 [2024-11-16 16:43:52.191475] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:14.818 16:43:52 -- host/timeout.sh@56 -- # sleep 2
00:24:16.721 [2024-11-16 16:43:54.191539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.721 [2024-11-16 16:43:54.191619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.721 [2024-11-16 16:43:54.191636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae48c0 with addr=10.0.0.2, port=4420
00:24:16.721 [2024-11-16 16:43:54.191646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae48c0 is same with the state(5) to be set
00:24:16.721 [2024-11-16 16:43:54.191664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae48c0 (9): Bad file descriptor
00:24:16.721 [2024-11-16 16:43:54.191678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.721 [2024-11-16 16:43:54.191687] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.721 [2024-11-16 16:43:54.191695] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.721 [2024-11-16 16:43:54.191712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:16.721 [2024-11-16 16:43:54.191721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.721 16:43:54 -- host/timeout.sh@57 -- # get_controller
00:24:16.721 16:43:54 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:16.721 16:43:54 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:16.980 16:43:54 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:16.980 16:43:54 -- host/timeout.sh@58 -- # get_bdev
00:24:16.980 16:43:54 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:16.980 16:43:54 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:17.238 16:43:54 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:17.238 16:43:54 -- host/timeout.sh@61 -- # sleep 5
00:24:19.142 [2024-11-16 16:43:56.191792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.142 [2024-11-16 16:43:56.191874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.143 [2024-11-16 16:43:56.191891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae48c0 with addr=10.0.0.2, port=4420
00:24:19.143 [2024-11-16 16:43:56.191902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae48c0 is same with the state(5) to be set
00:24:19.143 [2024-11-16 16:43:56.191920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae48c0 (9): Bad file descriptor
00:24:19.143 [2024-11-16 16:43:56.191935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.143 [2024-11-16 16:43:56.191942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.143 [2024-11-16 16:43:56.191951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
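The get_controller/get_bdev probes in the trace above boil down to two RPC-plus-jq one-liners; while the outage is still shorter than the controller-loss timeout they must keep returning NVMe0 and NVMe0n1. A shell equivalent of the traced helpers (names mirror the trace; the functions themselves are a sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Same RPCs as host/timeout.sh@41 and @37 in the trace above.
get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
get_bdev()       { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }

# Mid-outage, before the 5 s controller-loss timeout expires, both still exist:
[[ $(get_controller) == "NVMe0" ]]
[[ $(get_bdev) == "NVMe0n1" ]]

Once the loss timeout fires, the same probes return empty strings, which is exactly what the '' == '' checks below assert.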
00:24:19.143 [2024-11-16 16:43:56.191973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:19.143 [2024-11-16 16:43:56.191982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:21.077 [2024-11-16 16:43:58.191997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:21.077 [2024-11-16 16:43:58.192028] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:21.077 [2024-11-16 16:43:58.192054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:21.077 [2024-11-16 16:43:58.192062] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:21.077 [2024-11-16 16:43:58.192089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:22.012
00:24:22.012 Latency(us)
00:24:22.012 Device Information      : runtime(s)     IOPS    MiB/s  Fail/s  TO/s   Average      min         max
00:24:22.012 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:22.012 Verification LBA range: start 0x0 length 0x4000
00:24:22.012 NVMe0n1                 :       8.13  2175.98     8.50   15.75  0.00  58359.00  2293.76  7046430.72
00:24:22.012 ===================================================================================================================
00:24:22.012 Total                   :             2175.98     8.50   15.75  0.00  58359.00  2293.76  7046430.72
00:24:22.012 0
00:24:22.270 16:43:59 -- host/timeout.sh@62 -- # get_controller
00:24:22.270 16:43:59 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:22.270 16:43:59 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:22.528 16:43:59 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:24:22.528 16:43:59 -- host/timeout.sh@63 -- # get_bdev
00:24:22.528 16:43:59 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:22.528 16:43:59 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:22.786 16:44:00 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:24:22.786 16:44:00 -- host/timeout.sh@65 -- # wait 100663
00:24:22.786 16:44:00 -- host/timeout.sh@67 -- # killprocess 100621
00:24:22.786 16:44:00 -- common/autotest_common.sh@936 -- # '[' -z 100621 ']'
00:24:22.786 16:44:00 -- common/autotest_common.sh@940 -- # kill -0 100621
00:24:22.786 16:44:00 -- common/autotest_common.sh@941 -- # uname
00:24:22.786 16:44:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:22.786 16:44:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100621
00:24:22.786 16:44:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:24:22.786 16:44:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
killing process with pid 100621
16:44:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100621'
16:44:00 -- common/autotest_common.sh@955 -- # kill 100621
00:24:22.786 Received shutdown signal, test time was about 9.186708 seconds
00:24:22.786
00:24:22.786 Latency(us)
00:24:22.786 Device Information      : runtime(s)     IOPS    MiB/s  Fail/s  TO/s   Average      min         max
00:24:22.786 ===================================================================================================================
00:24:22.786 Total                   :                0.00     0.00    0.00  0.00      0.00     0.00        0.00
00:24:22.787 16:44:00 -- common/autotest_common.sh@960 -- # wait 100621
00:24:23.045 16:44:00 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:23.304 [2024-11-16 16:44:00.693511] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:23.304 16:44:00 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:23.304 16:44:00 -- host/timeout.sh@74 -- # bdevperf_pid=100822
00:24:23.304 16:44:00 -- host/timeout.sh@76 -- # waitforlisten 100822 /var/tmp/bdevperf.sock
00:24:23.304 16:44:00 -- common/autotest_common.sh@829 -- # '[' -z 100822 ']'
00:24:23.304 16:44:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:23.304 16:44:00 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:23.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
16:44:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
16:44:00 -- common/autotest_common.sh@838 -- # xtrace_disable
16:44:00 -- common/autotest_common.sh@10 -- # set +x
00:24:23.564 [2024-11-16 16:44:00.746889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
[2024-11-16 16:44:00.746986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100822 ]
00:24:23.564 [2024-11-16 16:44:00.871285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:23.564 [2024-11-16 16:44:00.933230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:24.499 16:44:01 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:24.499 16:44:01 -- common/autotest_common.sh@862 -- # return 0
00:24:24.499 16:44:01 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:24.499 16:44:01 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:24:24.757 NVMe0n1
00:24:24.757 16:44:02 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:24.757 16:44:02 -- host/timeout.sh@84 -- # rpc_pid=100864
00:24:24.757 16:44:02 -- host/timeout.sh@86 -- # sleep 1
00:24:25.016 Running I/O for 10 seconds...
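The second bdevperf run above follows the usual drive-by-RPC pattern: start bdevperf idle with -z, configure it over the Unix-domain RPC socket, then trigger the workload with bdevperf.py perform_tests. Distilled from the trace into plain shell (paths and flags exactly as logged; the socket poll is a crude stand-in for the harness's waitforlisten):

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# -z: start idle and wait for RPC configuration; remaining flags as traced
# (core mask 0x4, queue depth 128, 4 KiB I/Os, verify workload, 10 s run).
"$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &

until [ -S "$sock" ]; do sleep 0.1; done   # wait for the RPC socket to appear

# Uncapped retries (-r -1, as traced), then attach with the timeout test's knobs.
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1
"$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the configured workload; returns when the run completes.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

Starting bdevperf idle and attaching via RPC is what lets the test give the controller run-specific reconnect timeouts before any I/O is issued.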
00:24:25.954 16:44:03 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:25.954 [2024-11-16 16:44:03.386200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183bca0 is same with the state(5) to be set
[... this tcp.c:1576 recv-state message for tqpair=0x183bca0 repeats about eighty more times between 16:44:03.386274 and 16:44:03.386911 while the target-side qpair is torn down ...]
00:24:25.955 [2024-11-16 16:44:03.387395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.955 [2024-11-16 16:44:03.387538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command / "ABORTED - SQ DELETION (00/08)" pairing repeats for every command still outstanding on qid:1, READs and verify WRITEs with LBAs between 2032 and 3344 ...]
00:24:25.958 [2024-11-16 16:44:03.397828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.958 [2024-11-16 16:44:03.397838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.398095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.398110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.398120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.398130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.398139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.398149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.398157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.398167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.398277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.398291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.398300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.398437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.958 [2024-11-16 16:44:03.398574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.398597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.398731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.398744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.958 [2024-11-16 16:44:03.399001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.958 [2024-11-16 16:44:03.399129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.399160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.399277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.958 [2024-11-16 16:44:03.399297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.399413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.958 [2024-11-16 16:44:03.399433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.399677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.399697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.399716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.399734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.399752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.958 [2024-11-16 16:44:03.399870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.958 [2024-11-16 16:44:03.399882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.958 [2024-11-16 16:44:03.399999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.958 [2024-11-16 16:44:03.400016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.958 [2024-11-16 16:44:03.400142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:25.958 [2024-11-16 16:44:03.400163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.958 [2024-11-16 16:44:03.400286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11da660 is same with the state(5) to be set
00:24:25.958 [2024-11-16 16:44:03.400301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:25.958 [2024-11-16 16:44:03.400415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:25.958 [2024-11-16 16:44:03.400426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3064 len:8 PRP1 0x0 PRP2 0x0
00:24:25.958 [2024-11-16 16:44:03.400435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.958 [2024-11-16 16:44:03.400589] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11da660 was disconnected and freed. reset controller.
00:24:25.958 [2024-11-16 16:44:03.400867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:25.958 [2024-11-16 16:44:03.400889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.958 [2024-11-16 16:44:03.400900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:25.958 [2024-11-16 16:44:03.400908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.958 [2024-11-16 16:44:03.400917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:25.958 [2024-11-16 16:44:03.400925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.958 [2024-11-16 16:44:03.400934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:25.958 [2024-11-16 16:44:03.400941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:25.958 [2024-11-16 16:44:03.400949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11558c0 is same with the state(5) to be set
00:24:25.958 [2024-11-16 16:44:03.401415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:25.958 [2024-11-16 16:44:03.401465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11558c0 (9): Bad file descriptor
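Every queued I/O in the dump above completes with the same status pair, printed as "ABORTED - SQ DELETION (00/08)": status code type 00h (generic command status) and status code 08h (command aborted due to SQ deletion), which is what the host driver reports for commands still outstanding when a submission queue is torn down during a controller reset. A minimal sketch of how that pair unpacks from the completion's 16-bit status field, written against the NVMe-spec layout rather than SPDK's own print helper (the raw value below is an assumed example, chosen to match the log):

    /* Illustrative decode of the "(00/08)" pair printed by
     * spdk_nvme_print_completion above; plain C against the NVMe
     * Completion Queue Entry status layout, not SPDK's helper. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t status = 0x0010;             /* assumed raw halfword: SC=0x08, SCT=0x0 */

        unsigned p   = status & 0x1;          /* phase tag        */
        unsigned sc  = (status >> 1) & 0xff;  /* status code      */
        unsigned sct = (status >> 9) & 0x7;   /* status code type */
        unsigned m   = (status >> 14) & 0x1;  /* more             */
        unsigned dnr = (status >> 15) & 0x1;  /* do not retry     */

        /* SCT 0h = generic command status; SC 08h there is
         * "Command Aborted due to SQ Deletion" - hence "(00/08)". */
        printf("sct=%#x sc=%#x p=%u m=%u dnr=%u\n", sct, sc, p, m, dnr);
        return 0;
    }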
00:24:25.958 [2024-11-16 16:44:03.401799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.958 [2024-11-16 16:44:03.401858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.958 [2024-11-16 16:44:03.401874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11558c0 with addr=10.0.0.2, port=4420
00:24:25.959 [2024-11-16 16:44:03.401884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11558c0 is same with the state(5) to be set
00:24:25.959 [2024-11-16 16:44:03.402115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11558c0 (9): Bad file descriptor
00:24:25.959 [2024-11-16 16:44:03.402151] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:25.959 [2024-11-16 16:44:03.402242] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:25.959 [2024-11-16 16:44:03.402263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:25.959 [2024-11-16 16:44:03.402285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:25.959 [2024-11-16 16:44:03.402295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:25.959 16:44:03 -- host/timeout.sh@90 -- # sleep 1
00:24:27.336 [2024-11-16 16:44:04.402569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.336 [2024-11-16 16:44:04.402650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.336 [2024-11-16 16:44:04.402666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11558c0 with addr=10.0.0.2, port=4420
00:24:27.336 [2024-11-16 16:44:04.402675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11558c0 is same with the state(5) to be set
00:24:27.336 [2024-11-16 16:44:04.402693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11558c0 (9): Bad file descriptor
00:24:27.336 [2024-11-16 16:44:04.402707] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:27.336 [2024-11-16 16:44:04.402716] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:27.336 [2024-11-16 16:44:04.402724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:27.336 [2024-11-16 16:44:04.402742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:27.336 [2024-11-16 16:44:04.402750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:27.336 16:44:04 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:27.336 [2024-11-16 16:44:04.604007] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:27.336 16:44:04 -- host/timeout.sh@92 -- # wait 100864
00:24:28.271 [2024-11-16 16:44:05.415209] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
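errno 111 on Linux is ECONNREFUSED: while the target's listener on 10.0.0.2:4420 is down, nothing accepts the host's TCP connection, so each reconnect attempt fails and the reset loop retries once per second (host/timeout.sh@90's sleep 1) until host/timeout.sh@91 re-adds the listener with rpc.py nvmf_subsystem_add_listener, after which the reset finally succeeds. A rough standalone illustration of that refused-until-listening pattern, using plain POSIX sockets rather than SPDK's nvme_tcp_qpair_connect_sock path (address and port taken from the log):

    /* Sketch only: connect() keeps returning ECONNREFUSED (111) until
     * a listener exists on 10.0.0.2:4420 again, mirroring the per-second
     * retry visible above. Not SPDK's reconnect implementation. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        for (;;) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("connected\n");                /* listener is back */
                close(fd);
                return 0;
            }
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
            close(fd);
            sleep(1);                                 /* mirrors host/timeout.sh's sleep 1 */
        }
    }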
00:24:34.835 00:24:34.835 Latency(us) 00:24:34.835 [2024-11-16T16:44:12.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.835 [2024-11-16T16:44:12.326Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:34.835 Verification LBA range: start 0x0 length 0x4000 00:24:34.835 NVMe0n1 : 10.01 10981.29 42.90 0.00 0.00 11642.26 1295.83 3035150.89 00:24:34.835 [2024-11-16T16:44:12.326Z] =================================================================================================================== 00:24:34.835 [2024-11-16T16:44:12.326Z] Total : 10981.29 42.90 0.00 0.00 11642.26 1295.83 3035150.89 00:24:34.835 0 00:24:35.094 16:44:12 -- host/timeout.sh@97 -- # rpc_pid=100986 00:24:35.094 16:44:12 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:35.094 16:44:12 -- host/timeout.sh@98 -- # sleep 1 00:24:35.094 Running I/O for 10 seconds... 00:24:36.029 16:44:13 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.291 [2024-11-16 16:44:13.570936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.571686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.571775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.571834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.571886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.571943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572280] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572334] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572455] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.572964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573204] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573347] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573416] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 16:44:13.573768] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697110 is same with the state(5) to be set 00:24:36.291 [2024-11-16 
16:44:13.574189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.291 [2024-11-16 16:44:13.574235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.291 [2024-11-16 16:44:13.574258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.291 [2024-11-16 16:44:13.574271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.291 [2024-11-16 16:44:13.574283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.291 [2024-11-16 16:44:13.574293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.291 [2024-11-16 16:44:13.574305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.291 [2024-11-16 16:44:13.574314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.291 [2024-11-16 16:44:13.574327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.291 [2024-11-16 16:44:13.574337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.291 [2024-11-16 16:44:13.574598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.291 [2024-11-16 16:44:13.574611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.291 [2024-11-16 16:44:13.574621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.291 [2024-11-16 16:44:13.574629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.291 [2024-11-16 16:44:13.574714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.574728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.574739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.574748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.574758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.574767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.574777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.292 [2024-11-16 16:44:13.574896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.574909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.574917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.575302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.292 [2024-11-16 16:44:13.575326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.575346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.292 [2024-11-16 16:44:13.575366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.292 [2024-11-16 16:44:13.575394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.575661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.575684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.292 [2024-11-16 16:44:13.575702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:106 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.575721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.575739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.292 [2024-11-16 16:44:13.575756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.575872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.575884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.292 [2024-11-16 16:44:13.576021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.576291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.576313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.576333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.292 [2024-11-16 16:44:13.576353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.576373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.576643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.576673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.576691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.292 [2024-11-16 16:44:13.576709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.576838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.576849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.576857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.577199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.577225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.577246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.577266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:36.292 [2024-11-16 16:44:13.577286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.577393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.577420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.577820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.577851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.292 [2024-11-16 16:44:13.577870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.292 [2024-11-16 16:44:13.577880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.577888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.577899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.577908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.577917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.577925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.577935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.577944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.577954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.578037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.578100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.578376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.578621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.578642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.578661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.578679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.578697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.578793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.578815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.578833] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.578852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.578977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.578987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.578995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.579014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.579284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.579305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.579326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.579577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.579604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.579624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.579642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.579782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.579910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.579930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.580172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.580196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.580209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.293 [2024-11-16 16:44:13.580219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.580230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.580239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.580250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.580384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.580609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.580621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.580631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.580639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.580649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.580658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:36.293 [2024-11-16 16:44:13.580881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.580904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.580916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.580925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.580936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.580944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.581219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.293 [2024-11-16 16:44:13.581244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.293 [2024-11-16 16:44:13.581257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.581268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.581279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.581289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.581300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.581567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.581587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.581596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.581607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.581615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.581626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.581634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.581644] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.581900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.581921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.582031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.582044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.582191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.582306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.582318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.582328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.582453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.582469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.582478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.582616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.582745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.582765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.582868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.582891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.582902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.582912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.583035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.583048] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.583283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.583300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.583309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.583319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.583327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.583338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.583477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.583612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.583701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.583719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.583728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.583738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.583747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.583757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.583981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.584011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.584030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12112 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.584048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.584303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.584325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.584463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.584600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.584853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.584873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.584892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.584902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.294 [2024-11-16 16:44:13.584910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.585158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.585223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.585237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:36.294 [2024-11-16 16:44:13.585246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.585258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.585267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.585278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.585391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.585406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.294 [2024-11-16 16:44:13.585651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.294 [2024-11-16 16:44:13.585674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.295 [2024-11-16 16:44:13.585683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.295 [2024-11-16 16:44:13.585693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.295 [2024-11-16 16:44:13.585701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.295 [2024-11-16 16:44:13.585710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a61d0 is same with the state(5) to be set 00:24:36.295 [2024-11-16 16:44:13.585722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:36.295 [2024-11-16 16:44:13.585729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:36.295 [2024-11-16 16:44:13.585821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11544 len:8 PRP1 0x0 PRP2 0x0 00:24:36.295 [2024-11-16 16:44:13.585836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.295 [2024-11-16 16:44:13.585988] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11a61d0 was disconnected and freed. reset controller. 
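Each NOTICE pair above is SPDK dumping one in-flight command and its outcome: nvme_io_qpair_print_command prints the queued READ/WRITE, and spdk_nvme_print_completion prints the status it was finished with, here always ABORTED - SQ DELETION (00/08), NVMe generic status code 0x08 (command aborted due to submission queue deletion), which is what every outstanding I/O receives once the qpair is torn down. A small stand-alone sketch (not part of the test suite) that tallies these dumps from a saved copy of this log; the regex simply mirrors the line format shown above:

import re
import sys
from collections import Counter

# Matches the nvme_io_qpair_print_command lines dumped above.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

def tally(path):
    ops = Counter()
    with open(path, errors="replace") as f:
        for line in f:
            # finditer, because one physical log line can hold many fused entries
            for m in CMD_RE.finditer(line):
                ops[m.group(1)] += 1
    return ops

if __name__ == "__main__":
    for op, n in tally(sys.argv[1]).most_common():
        print(op, n, "commands dumped with an ABORTED - SQ DELETION completion")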
00:24:36.295 [2024-11-16 16:44:13.586274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:36.295 [2024-11-16 16:44:13.586301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:36.295 [2024-11-16 16:44:13.586313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:36.295 [2024-11-16 16:44:13.586321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:36.295 [2024-11-16 16:44:13.586329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:36.295 [2024-11-16 16:44:13.586337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:36.295 [2024-11-16 16:44:13.586346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:36.295 [2024-11-16 16:44:13.586353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:36.295 [2024-11-16 16:44:13.586361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11558c0 is same with the state(5) to be set
00:24:36.295 [2024-11-16 16:44:13.586845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.295 [2024-11-16 16:44:13.586910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11558c0 (9): Bad file descriptor
00:24:36.295 [2024-11-16 16:44:13.586999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.295 [2024-11-16 16:44:13.587302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.295 [2024-11-16 16:44:13.587335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11558c0 with addr=10.0.0.2, port=4420
00:24:36.295 [2024-11-16 16:44:13.587346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11558c0 is same with the state(5) to be set
00:24:36.295 [2024-11-16 16:44:13.587365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11558c0 (9): Bad file descriptor
00:24:36.295 [2024-11-16 16:44:13.587381] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:36.295 [2024-11-16 16:44:13.587389] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:36.295 [2024-11-16 16:44:13.587636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:36.295 [2024-11-16 16:44:13.587674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:36.295 [2024-11-16 16:44:13.587686] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:36.295 16:44:13 -- host/timeout.sh@101 -- # sleep 3
00:24:37.231 [2024-11-16 16:44:14.588001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.231 [2024-11-16 16:44:14.588637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:37.231 [2024-11-16 16:44:14.588764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11558c0 with addr=10.0.0.2, port=4420
00:24:37.231 [2024-11-16 16:44:14.589085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11558c0 is same with the state(5) to be set
00:24:37.231 [2024-11-16 16:44:14.589202] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11558c0 (9): Bad file descriptor
00:24:37.231 [2024-11-16 16:44:14.589579] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:37.231 [2024-11-16 16:44:14.589624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:37.231 [2024-11-16 16:44:14.589634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.231 [2024-11-16 16:44:14.589656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:37.231 [2024-11-16 16:44:14.589667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.166 [2024-11-16 16:44:15.589735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.166 [2024-11-16 16:44:15.590460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.166 [2024-11-16 16:44:15.590566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11558c0 with addr=10.0.0.2, port=4420
00:24:38.166 [2024-11-16 16:44:15.590647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11558c0 is same with the state(5) to be set
00:24:38.166 [2024-11-16 16:44:15.590712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11558c0 (9): Bad file descriptor
00:24:38.166 [2024-11-16 16:44:15.591222] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.166 [2024-11-16 16:44:15.591417] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.166 [2024-11-16 16:44:15.591719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.166 [2024-11-16 16:44:15.591845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
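The failed cycles above all have the same shape: connect() to 10.0.0.2:4420 is refused (errno = 111, ECONNREFUSED, because the target's listener is gone), the qpair cannot be flushed, the controller is marked failed, and bdev_nvme schedules the next reset; the timestamps (16:44:13, :14, :15) show a fresh attempt about once per second while host/timeout.sh@101 sleeps. An illustrative retry loop with that cadence, assuming only the address, port, and errno visible in the log; this is plain socket code, not SPDK's reconnect path:

import errno
import socket
import time

def try_reconnect(addr="10.0.0.2", port=4420, delay_sec=1.0, attempts=4):
    # One connect() per delay_sec, mirroring the cadence logged above.
    for i in range(attempts):
        try:
            with socket.create_connection((addr, port), timeout=1.0):
                return True  # listener is back; the host would now re-init the controller
        except OSError as e:
            name = errno.errorcode.get(e.errno, "?")
            print(f"attempt {i}: connect() failed, errno = {e.errno} ({name})")
        time.sleep(delay_sec)
    return False  # comparable to giving up once a ctrlr-loss timeout expires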
00:24:38.166 [2024-11-16 16:44:15.592190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.103 [2024-11-16 16:44:16.592546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.362 [2024-11-16 16:44:16.593131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.362 [2024-11-16 16:44:16.593305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11558c0 with addr=10.0.0.2, port=4420
00:24:39.362 [2024-11-16 16:44:16.593708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11558c0 is same with the state(5) to be set
00:24:39.362 [2024-11-16 16:44:16.594252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11558c0 (9): Bad file descriptor
00:24:39.362 [2024-11-16 16:44:16.594508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.362 16:44:16 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:39.362 [2024-11-16 16:44:16.594924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.362 [2024-11-16 16:44:16.595012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.362 [2024-11-16 16:44:16.597522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.362 [2024-11-16 16:44:16.597652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.362 [2024-11-16 16:44:16.835769] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:39.621 16:44:16 -- host/timeout.sh@103 -- # wait 100986
00:24:40.188 [2024-11-16 16:44:17.622988] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
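What finally breaks the loop is the listener coming back: host/timeout.sh@102 re-adds it while a reset is still pending, the target logs that it is listening on 10.0.0.2 port 4420 again, and the next reconnect attempt succeeds ("Resetting controller successful."). A sketch of that listener bounce assembled from the rpc.py invocations visible in this log; the paths, NQN, address, and port are copied from the log, while the bounce_listener helper itself is hypothetical:

import subprocess
import time

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
LISTENER = ["-t", "tcp", "-a", "10.0.0.2", "-s", "4420"]

def bounce_listener(down_sec=3):
    # Drop the listener: in-flight I/O is completed with ABORTED - SQ DELETION.
    subprocess.run([RPC, "nvmf_subsystem_remove_listener", NQN, *LISTENER], check=True)
    time.sleep(down_sec)  # reconnect attempts fail with errno 111 meanwhile
    # Restore it: the host's pending controller reset succeeds shortly after.
    subprocess.run([RPC, "nvmf_subsystem_add_listener", NQN, *LISTENER], check=True)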
00:24:45.455
00:24:45.455 Latency(us)
00:24:45.455 [2024-11-16T16:44:22.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:45.455 [2024-11-16T16:44:22.946Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:45.455 Verification LBA range: start 0x0 length 0x4000
00:24:45.455 NVMe0n1 : 10.01 9772.88 38.18 7188.83 0.00 7530.80 569.72 3035150.89
00:24:45.455 [2024-11-16T16:44:22.946Z] ===================================================================================================================
00:24:45.455 [2024-11-16T16:44:22.946Z] Total : 9772.88 38.18 7188.83 0.00 7530.80 0.00 3035150.89
00:24:45.455 0
00:24:45.455 16:44:22 -- host/timeout.sh@105 -- # killprocess 100822
00:24:45.455 16:44:22 -- common/autotest_common.sh@936 -- # '[' -z 100822 ']'
00:24:45.455 16:44:22 -- common/autotest_common.sh@940 -- # kill -0 100822
00:24:45.455 16:44:22 -- common/autotest_common.sh@941 -- # uname
00:24:45.455 16:44:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:45.455 16:44:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100822
00:24:45.455 16:44:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:24:45.455 16:44:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
killing process with pid 100822
16:44:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100822'
16:44:22 -- common/autotest_common.sh@955 -- # kill 100822
Received shutdown signal, test time was about 10.000000 seconds
00:24:45.455
00:24:45.455 Latency(us)
00:24:45.455 [2024-11-16T16:44:22.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:45.455 [2024-11-16T16:44:22.946Z] ===================================================================================================================
00:24:45.455 [2024-11-16T16:44:22.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:45.455 16:44:22 -- common/autotest_common.sh@960 -- # wait 100822
00:24:45.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:45.455 16:44:22 -- host/timeout.sh@110 -- # bdevperf_pid=101111
00:24:45.455 16:44:22 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:24:45.455 16:44:22 -- host/timeout.sh@112 -- # waitforlisten 101111 /var/tmp/bdevperf.sock
00:24:45.455 16:44:22 -- common/autotest_common.sh@829 -- # '[' -z 101111 ']'
00:24:45.455 16:44:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:45.455 16:44:22 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:45.455 16:44:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:45.455 16:44:22 -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:45.455 16:44:22 -- common/autotest_common.sh@10 -- # set +x
00:24:45.455 [2024-11-16 16:44:22.760425] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
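The summary table above is internally consistent: at the 4096-byte IO size, the reported 9772.88 IOPS matches the reported 38.18 MiB/s, and the Fail/s column says roughly 42% of all operations failed while the listener was down. The relaunched bdevperf just below attaches with --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2, so a future outage would be retried about every 2 s for at most 5 s. A quick check on the table's own numbers:

# Values copied from the latency table above (bdevperf, IO size 4096).
runtime_s = 10.01
iops = 9772.88    # completed I/O per second
fail_s = 7188.83  # failed I/O per second while the listener was down
mib_s = 38.18     # reported throughput

# Throughput consistency: 9772.88 IOPS * 4096 B/IO is ~38.18 MiB/s.
print(f"computed MiB/s: {iops * 4096 / (1024 * 1024):.2f} (reported {mib_s})")

# Share of operations that failed over the whole run.
total_per_s = iops + fail_s
print(f"failed fraction: {fail_s / total_per_s:.1%} of "
      f"~{total_per_s * runtime_s:,.0f} I/Os in {runtime_s} s")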
00:24:45.455 [2024-11-16 16:44:22.760513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101111 ] 00:24:45.455 [2024-11-16 16:44:22.891403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.714 [2024-11-16 16:44:22.957504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.281 16:44:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.281 16:44:23 -- common/autotest_common.sh@862 -- # return 0 00:24:46.281 16:44:23 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 101111 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:46.281 16:44:23 -- host/timeout.sh@116 -- # dtrace_pid=101135 00:24:46.281 16:44:23 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:46.540 16:44:23 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:46.799 NVMe0n1 00:24:46.799 16:44:24 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:46.799 16:44:24 -- host/timeout.sh@124 -- # rpc_pid=101194 00:24:46.799 16:44:24 -- host/timeout.sh@125 -- # sleep 1 00:24:47.058 Running I/O for 10 seconds... 00:24:47.998 16:44:25 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.998 [2024-11-16 16:44:25.421791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.998 [2024-11-16 16:44:25.421846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.998 [2024-11-16 16:44:25.421867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.998 [2024-11-16 16:44:25.421876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.998 [2024-11-16 16:44:25.421884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.998 [2024-11-16 16:44:25.421891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.998 [2024-11-16 16:44:25.421899] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.998 [2024-11-16 16:44:25.421907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421970] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421987] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.421994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422002] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422108] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169aba0 is same with the state(5) to be set 00:24:47.999 [2024-11-16 16:44:25.422747] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.422786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.422823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.422833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.422843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.422853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.422863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.422871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.422880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.422888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.422898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.422906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.422916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423828] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.423836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.423845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.424961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.424970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.425104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.425120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.425134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.425150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.425402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.425425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.999 [2024-11-16 16:44:25.425446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.999 [2024-11-16 16:44:25.425457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.425469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.425479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.425491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.425499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.425510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.425520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.425531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.425540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.425550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.425559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.425570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.425579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.425714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 
[2024-11-16 16:44:25.426602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.426977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.426986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.427940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.000 [2024-11-16 16:44:25.427949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.000 [2024-11-16 16:44:25.428182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.000 [2024-11-16 16:44:25.428198 - 16:44:25.432430] nvme_qpair.c: 243/474: [dozens of further queued READ commands on qid:1 (len:8, SGL TRANSPORT DATA BLOCK), each printed with the same ABORTED - SQ DELETION (00/08) qid:1 cid:0 completion as the submission queue was deleted; the repeating NOTICE pairs are condensed here]
00:24:48.001 [2024-11-16 16:44:25.432584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:48.001 [2024-11-16 16:44:25.432700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:48.001 [2024-11-16 16:44:25.432717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29136 len:8 PRP1 0x0 PRP2 0x0
00:24:48.001 [2024-11-16 16:44:25.432726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.001 [2024-11-16 16:44:25.432965] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb6780 was disconnected and freed. reset controller.
00:24:48.001 [2024-11-16 16:44:25.433208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.001 [2024-11-16 16:44:25.433232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.001 [2024-11-16 16:44:25.433243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.001 [2024-11-16 16:44:25.433252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.001 [2024-11-16 16:44:25.433263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.001 [2024-11-16 16:44:25.433272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.001 [2024-11-16 16:44:25.433281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:48.001 [2024-11-16 16:44:25.433290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:48.001 [2024-11-16 16:44:25.433382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c318c0 is same with the state(5) to be set
00:24:48.001 [2024-11-16 16:44:25.433861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.001 [2024-11-16 16:44:25.433909] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c318c0 (9): Bad file descriptor
00:24:48.001 [2024-11-16 16:44:25.434219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.001 [2024-11-16 16:44:25.434291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.001 [2024-11-16 16:44:25.434308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c318c0 with addr=10.0.0.2, port=4420
00:24:48.001 [2024-11-16 16:44:25.434319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c318c0 is same with the state(5) to be set
00:24:48.001 [2024-11-16 16:44:25.434443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c318c0 (9): Bad file descriptor
00:24:48.001 [2024-11-16 16:44:25.434561] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:48.002 [2024-11-16 16:44:25.434573] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:48.002 [2024-11-16 16:44:25.434582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:48.002 [2024-11-16 16:44:25.434603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
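
The errno = 111 above is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 while the target is down, so every reconnect attempt that follows fails the same way. As a rough standalone illustration (address and port taken from the log; bash's /dev/tcp redirection is the only tooling assumed), the condition the initiator keeps hitting can be probed by hand:

# Probe the NVMe-oF TCP listener the initiator is trying to reach; a
# refused connect() here is the same ECONNREFUSED (errno 111) that
# posix_sock_create reports in the entries above.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 is not accepting connections (refused or unreachable)"
fi
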
00:24:48.002 [2024-11-16 16:44:25.434825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:48.002 16:44:25 -- host/timeout.sh@128 -- # wait 101194
00:24:50.546 [2024-11-16 16:44:27.434918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.546 [2024-11-16 16:44:27.435015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.546 [2024-11-16 16:44:27.435032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c318c0 with addr=10.0.0.2, port=4420
00:24:50.546 [2024-11-16 16:44:27.435042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c318c0 is same with the state(5) to be set
00:24:50.546 [2024-11-16 16:44:27.435061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c318c0 (9): Bad file descriptor
00:24:50.546 [2024-11-16 16:44:27.435087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:50.546 [2024-11-16 16:44:27.435097] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:50.546 [2024-11-16 16:44:27.435106] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:50.546 [2024-11-16 16:44:27.435125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:50.546 [2024-11-16 16:44:27.435134] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.994 [2024-11-16 16:44:29.435260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.994 [2024-11-16 16:44:29.435373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.994 [2024-11-16 16:44:29.435391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c318c0 with addr=10.0.0.2, port=4420
00:24:51.994 [2024-11-16 16:44:29.435403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c318c0 is same with the state(5) to be set
00:24:51.994 [2024-11-16 16:44:29.435424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c318c0 (9): Bad file descriptor
00:24:51.994 [2024-11-16 16:44:29.435441] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:51.994 [2024-11-16 16:44:29.435450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:51.994 [2024-11-16 16:44:29.435474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.994 [2024-11-16 16:44:29.435497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:51.994 [2024-11-16 16:44:29.435507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:54.524 [2024-11-16 16:44:31.435550] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:54.524 [2024-11-16 16:44:31.435604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:54.524 [2024-11-16 16:44:31.435632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:54.524 [2024-11-16 16:44:31.435640] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:54.524 [2024-11-16 16:44:31.435664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:55.090
00:24:55.090 Latency(us)
00:24:55.090 [2024-11-16T16:44:32.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:55.090 [2024-11-16T16:44:32.581Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:24:55.090 NVMe0n1 : 8.12 3168.62 12.38 15.76 0.00 40143.24 2800.17 7046430.72
00:24:55.090 [2024-11-16T16:44:32.581Z] ===================================================================================================================
00:24:55.090 [2024-11-16T16:44:32.581Z] Total : 3168.62 12.38 15.76 0.00 40143.24 2800.17 7046430.72
00:24:55.090 0
00:24:55.090 16:44:32 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:55.090 Attaching 5 probes...
00:24:55.090 1216.599014: reset bdev controller NVMe0
00:24:55.090 1216.690969: reconnect bdev controller NVMe0
00:24:55.090 3217.598371: reconnect delay bdev controller NVMe0
00:24:55.090 3217.613180: reconnect bdev controller NVMe0
00:24:55.090 5217.900932: reconnect delay bdev controller NVMe0
00:24:55.090 5217.935111: reconnect bdev controller NVMe0
00:24:55.090 7218.275440: reconnect delay bdev controller NVMe0
00:24:55.090 7218.292003: reconnect bdev controller NVMe0
00:24:55.090 16:44:32 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:24:55.090 16:44:32 -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:24:55.090 16:44:32 -- host/timeout.sh@136 -- # kill 101135
00:24:55.090 16:44:32 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:55.090 16:44:32 -- host/timeout.sh@139 -- # killprocess 101111
00:24:55.090 16:44:32 -- common/autotest_common.sh@936 -- # '[' -z 101111 ']'
00:24:55.090 16:44:32 -- common/autotest_common.sh@940 -- # kill -0 101111
00:24:55.090 16:44:32 -- common/autotest_common.sh@941 -- # uname
00:24:55.090 16:44:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:55.090 16:44:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101111
00:24:55.090 killing process with pid 101111
00:24:55.090 Received shutdown signal, test time was about 8.197059 seconds
00:24:55.090
00:24:55.090 Latency(us)
00:24:55.090 [2024-11-16T16:44:32.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:55.090 [2024-11-16T16:44:32.581Z] ===================================================================================================================
00:24:55.090 [2024-11-16T16:44:32.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:55.090 16:44:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:24:55.090 16:44:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:24:55.090 16:44:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101111'
00:24:55.090 16:44:32 -- common/autotest_common.sh@955 -- # kill 101111
00:24:55.090 16:44:32 -- common/autotest_common.sh@960 -- # wait 101111
00:24:55.349
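
The probe output above is the heart of this test: the attached probes recorded one reset of bdev controller NVMe0 at ~1.2 s and reconnect attempts at ~2-second spacing, three of them preceded by an explicit reconnect delay. The script counts those delay lines and passes because (( 3 <= 2 )) evaluates false. A minimal standalone sketch of that assertion (the trace path here is assumed; the real logic lives in test/nvmf/host/timeout.sh):

# Count how many reconnect attempts honoured the configured delay.
trace_file=/tmp/nvme0_trace.txt   # assumed local copy of the trace.txt shown above
delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")

# Three delayed reconnects were recorded (~3.2 s, ~5.2 s, ~7.2 s), so the
# test requires the count to exceed two; this is the inverse of (( 3 <= 2 )).
if (( delay_count <= 2 )); then
    echo "FAIL: expected more than 2 delayed reconnects, saw $delay_count" >&2
    exit 1
fi
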
16:44:32 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.608 16:44:32 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:55.608 16:44:32 -- host/timeout.sh@145 -- # nvmftestfini 00:24:55.608 16:44:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:55.608 16:44:32 -- nvmf/common.sh@116 -- # sync 00:24:55.608 16:44:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:55.608 16:44:33 -- nvmf/common.sh@119 -- # set +e 00:24:55.608 16:44:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:55.608 16:44:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:55.608 rmmod nvme_tcp 00:24:55.608 rmmod nvme_fabrics 00:24:55.608 rmmod nvme_keyring 00:24:55.608 16:44:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:55.608 16:44:33 -- nvmf/common.sh@123 -- # set -e 00:24:55.608 16:44:33 -- nvmf/common.sh@124 -- # return 0 00:24:55.608 16:44:33 -- nvmf/common.sh@477 -- # '[' -n 100524 ']' 00:24:55.608 16:44:33 -- nvmf/common.sh@478 -- # killprocess 100524 00:24:55.608 16:44:33 -- common/autotest_common.sh@936 -- # '[' -z 100524 ']' 00:24:55.608 16:44:33 -- common/autotest_common.sh@940 -- # kill -0 100524 00:24:55.608 16:44:33 -- common/autotest_common.sh@941 -- # uname 00:24:55.608 16:44:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:55.867 16:44:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100524 00:24:55.867 killing process with pid 100524 00:24:55.867 16:44:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:55.867 16:44:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:55.867 16:44:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100524' 00:24:55.867 16:44:33 -- common/autotest_common.sh@955 -- # kill 100524 00:24:55.867 16:44:33 -- common/autotest_common.sh@960 -- # wait 100524 00:24:56.125 16:44:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:56.125 16:44:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:56.125 16:44:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:56.125 16:44:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.125 16:44:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:56.125 16:44:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.125 16:44:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.125 16:44:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.125 16:44:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:56.125 00:24:56.125 real 0m46.732s 00:24:56.125 user 2m15.613s 00:24:56.125 sys 0m5.615s 00:24:56.125 16:44:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:56.125 16:44:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.125 ************************************ 00:24:56.125 END TEST nvmf_timeout 00:24:56.125 ************************************ 00:24:56.125 16:44:33 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:24:56.125 16:44:33 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:24:56.125 16:44:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:56.125 16:44:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.125 16:44:33 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:56.125 00:24:56.125 real 17m30.882s 00:24:56.125 user 55m38.868s 00:24:56.125 sys 3m44.264s 00:24:56.125 16:44:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:56.125 16:44:33 -- 
common/autotest_common.sh@10 -- # set +x 00:24:56.125 ************************************ 00:24:56.125 END TEST nvmf_tcp 00:24:56.125 ************************************ 00:24:56.384 16:44:33 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:24:56.384 16:44:33 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:56.384 16:44:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:56.384 16:44:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:56.384 16:44:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.384 ************************************ 00:24:56.384 START TEST spdkcli_nvmf_tcp 00:24:56.384 ************************************ 00:24:56.384 16:44:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:56.384 * Looking for test storage... 00:24:56.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:56.384 16:44:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:56.384 16:44:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:56.384 16:44:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:56.384 16:44:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:56.384 16:44:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:56.384 16:44:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:56.384 16:44:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:56.384 16:44:33 -- scripts/common.sh@335 -- # IFS=.-: 00:24:56.384 16:44:33 -- scripts/common.sh@335 -- # read -ra ver1 00:24:56.384 16:44:33 -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.384 16:44:33 -- scripts/common.sh@336 -- # read -ra ver2 00:24:56.384 16:44:33 -- scripts/common.sh@337 -- # local 'op=<' 00:24:56.384 16:44:33 -- scripts/common.sh@339 -- # ver1_l=2 00:24:56.384 16:44:33 -- scripts/common.sh@340 -- # ver2_l=1 00:24:56.384 16:44:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:56.384 16:44:33 -- scripts/common.sh@343 -- # case "$op" in 00:24:56.384 16:44:33 -- scripts/common.sh@344 -- # : 1 00:24:56.384 16:44:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:56.384 16:44:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:56.384 16:44:33 -- scripts/common.sh@364 -- # decimal 1 00:24:56.384 16:44:33 -- scripts/common.sh@352 -- # local d=1 00:24:56.384 16:44:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.384 16:44:33 -- scripts/common.sh@354 -- # echo 1 00:24:56.384 16:44:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:56.384 16:44:33 -- scripts/common.sh@365 -- # decimal 2 00:24:56.384 16:44:33 -- scripts/common.sh@352 -- # local d=2 00:24:56.384 16:44:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.384 16:44:33 -- scripts/common.sh@354 -- # echo 2 00:24:56.384 16:44:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:56.384 16:44:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:56.384 16:44:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:56.384 16:44:33 -- scripts/common.sh@367 -- # return 0 00:24:56.384 16:44:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.384 16:44:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:56.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.384 --rc genhtml_branch_coverage=1 00:24:56.384 --rc genhtml_function_coverage=1 00:24:56.384 --rc genhtml_legend=1 00:24:56.384 --rc geninfo_all_blocks=1 00:24:56.384 --rc geninfo_unexecuted_blocks=1 00:24:56.384 00:24:56.384 ' 00:24:56.385 16:44:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.385 --rc genhtml_branch_coverage=1 00:24:56.385 --rc genhtml_function_coverage=1 00:24:56.385 --rc genhtml_legend=1 00:24:56.385 --rc geninfo_all_blocks=1 00:24:56.385 --rc geninfo_unexecuted_blocks=1 00:24:56.385 00:24:56.385 ' 00:24:56.385 16:44:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.385 --rc genhtml_branch_coverage=1 00:24:56.385 --rc genhtml_function_coverage=1 00:24:56.385 --rc genhtml_legend=1 00:24:56.385 --rc geninfo_all_blocks=1 00:24:56.385 --rc geninfo_unexecuted_blocks=1 00:24:56.385 00:24:56.385 ' 00:24:56.385 16:44:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.385 --rc genhtml_branch_coverage=1 00:24:56.385 --rc genhtml_function_coverage=1 00:24:56.385 --rc genhtml_legend=1 00:24:56.385 --rc geninfo_all_blocks=1 00:24:56.385 --rc geninfo_unexecuted_blocks=1 00:24:56.385 00:24:56.385 ' 00:24:56.385 16:44:33 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:56.385 16:44:33 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:56.385 16:44:33 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:56.385 16:44:33 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:56.385 16:44:33 -- nvmf/common.sh@7 -- # uname -s 00:24:56.385 16:44:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.385 16:44:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.385 16:44:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.385 16:44:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.385 16:44:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.385 16:44:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.385 16:44:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:24:56.385 16:44:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.385 16:44:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.385 16:44:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.385 16:44:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:24:56.385 16:44:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:24:56.385 16:44:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.385 16:44:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.385 16:44:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:56.385 16:44:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:56.385 16:44:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.385 16:44:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.385 16:44:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.385 16:44:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.385 16:44:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.385 16:44:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.385 16:44:33 -- paths/export.sh@5 -- # export PATH 00:24:56.385 16:44:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.385 16:44:33 -- nvmf/common.sh@46 -- # : 0 00:24:56.385 16:44:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:56.385 16:44:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:56.385 16:44:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:56.385 16:44:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.385 16:44:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.385 16:44:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:56.385 16:44:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:56.385 16:44:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:56.385 16:44:33 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:56.385 16:44:33 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:56.385 16:44:33 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:56.385 16:44:33 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:56.385 16:44:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:56.385 16:44:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.385 16:44:33 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:56.385 16:44:33 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101420 00:24:56.385 16:44:33 -- spdkcli/common.sh@34 -- # waitforlisten 101420 00:24:56.385 16:44:33 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:56.385 16:44:33 -- common/autotest_common.sh@829 -- # '[' -z 101420 ']' 00:24:56.385 16:44:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.385 16:44:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.385 16:44:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.385 16:44:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.385 16:44:33 -- common/autotest_common.sh@10 -- # set +x 00:24:56.644 [2024-11-16 16:44:33.906190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:56.644 [2024-11-16 16:44:33.906292] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101420 ] 00:24:56.644 [2024-11-16 16:44:34.043164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:56.644 [2024-11-16 16:44:34.115690] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:56.644 [2024-11-16 16:44:34.116007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.644 [2024-11-16 16:44:34.116033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.580 16:44:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.580 16:44:34 -- common/autotest_common.sh@862 -- # return 0 00:24:57.580 16:44:34 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:57.580 16:44:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.580 16:44:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.580 16:44:34 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:57.580 16:44:34 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:57.580 16:44:34 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:57.580 16:44:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:57.580 16:44:34 -- common/autotest_common.sh@10 -- # set +x 00:24:57.580 16:44:34 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:57.580 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:57.580 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:57.580 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:57.580 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:57.580 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:57.580 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:57.580 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:57.580 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:57.580 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:57.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:57.580 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:57.580 ' 00:24:58.147 [2024-11-16 16:44:35.377431] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:00.680 [2024-11-16 16:44:37.633197] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.615 [2024-11-16 16:44:38.922997] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:04.152 [2024-11-16 16:44:41.305837] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:06.055 [2024-11-16 16:44:43.356214] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:07.959 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:07.959 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:07.959 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:07.959 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:07.959 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:07.959 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:07.959 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:07.959 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:07.959 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:07.959 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:07.959 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:07.959 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:07.959 16:44:45 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:07.959 16:44:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:07.959 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:25:07.959 16:44:45 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:07.959 16:44:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:07.959 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:25:07.960 16:44:45 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:07.960 16:44:45 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:08.218 16:44:45 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:08.218 16:44:45 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:08.218 16:44:45 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:08.218 16:44:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:08.218 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:25:08.218 16:44:45 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:08.218 16:44:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.218 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:25:08.218 16:44:45 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:08.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:08.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:08.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:08.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:08.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:08.218 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:08.218 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:08.218 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:08.219 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:08.219 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:08.219 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:08.219 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:08.219 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:08.219 ' 00:25:14.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:14.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:14.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:14.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:14.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:14.786 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:14.786 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:14.786 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:14.786 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:14.786 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:14.786 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:14.786 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:14.786 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:14.786 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:14.786 16:44:51 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:14.786 16:44:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.786 16:44:51 -- common/autotest_common.sh@10 -- # set +x 00:25:14.786 16:44:51 -- spdkcli/nvmf.sh@90 -- # killprocess 101420 00:25:14.786 16:44:51 -- common/autotest_common.sh@936 -- # '[' -z 101420 ']' 00:25:14.786 16:44:51 -- common/autotest_common.sh@940 -- # kill -0 101420 00:25:14.786 16:44:51 -- common/autotest_common.sh@941 -- # uname 00:25:14.786 16:44:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:14.786 16:44:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101420 00:25:14.786 killing process with pid 101420 00:25:14.786 16:44:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:14.786 16:44:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:14.786 16:44:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101420' 00:25:14.786 16:44:51 -- common/autotest_common.sh@955 -- # kill 101420 00:25:14.786 [2024-11-16 16:44:51.288926] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:14.786 16:44:51 -- common/autotest_common.sh@960 -- # wait 101420 00:25:14.786 16:44:51 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:14.786 16:44:51 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:14.786 16:44:51 -- spdkcli/common.sh@13 -- # '[' -n 101420 ']' 00:25:14.786 16:44:51 -- spdkcli/common.sh@14 -- # killprocess 101420 00:25:14.786 16:44:51 -- common/autotest_common.sh@936 -- # '[' -z 101420 ']' 00:25:14.786 16:44:51 -- common/autotest_common.sh@940 -- # kill -0 101420 00:25:14.786 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101420) - No such process 00:25:14.786 Process with pid 101420 is not found 00:25:14.786 16:44:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101420 is not found' 00:25:14.786 16:44:51 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:14.786 16:44:51 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:14.786 16:44:51 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:14.786 00:25:14.786 real 0m17.921s 00:25:14.786 user 0m38.796s 00:25:14.786 sys 0m0.952s 00:25:14.786 16:44:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:14.786 16:44:51 -- common/autotest_common.sh@10 -- # set +x 00:25:14.786 
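
Each spdkcli line exercised above corresponds to a plain JSON-RPC call, which is what spdkcli_job.py issues under the hood. A hedged sketch of the same create flow driven directly through scripts/rpc.py (flag spellings recalled from the general rpc.py interface, so worth confirming against rpc.py -h for this SPDK revision):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 32 MiB malloc bdev with 512-byte blocks, like '/bdevs/malloc create 32 512 Malloc3'.
$rpc bdev_malloc_create 32 512 -b Malloc3

# TCP transport with the sizing used by the spdkcli run above.
$rpc nvmf_create_transport -t tcp --max-io-qpairs-per-ctrlr 4 -u 8192

# Subsystem, namespace and listener, mirroring the /nvmf/subsystem commands.
$rpc nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
$rpc nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260
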
************************************ 00:25:14.786 END TEST spdkcli_nvmf_tcp 00:25:14.786 ************************************ 00:25:14.786 16:44:51 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:14.786 16:44:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:14.786 16:44:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:14.786 16:44:51 -- common/autotest_common.sh@10 -- # set +x 00:25:14.786 ************************************ 00:25:14.786 START TEST nvmf_identify_passthru 00:25:14.786 ************************************ 00:25:14.786 16:44:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:14.786 * Looking for test storage... 00:25:14.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:14.786 16:44:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:14.786 16:44:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:14.786 16:44:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:14.786 16:44:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:14.786 16:44:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:14.786 16:44:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:14.786 16:44:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:14.786 16:44:51 -- scripts/common.sh@335 -- # IFS=.-: 00:25:14.786 16:44:51 -- scripts/common.sh@335 -- # read -ra ver1 00:25:14.786 16:44:51 -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.786 16:44:51 -- scripts/common.sh@336 -- # read -ra ver2 00:25:14.786 16:44:51 -- scripts/common.sh@337 -- # local 'op=<' 00:25:14.786 16:44:51 -- scripts/common.sh@339 -- # ver1_l=2 00:25:14.786 16:44:51 -- scripts/common.sh@340 -- # ver2_l=1 00:25:14.786 16:44:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:14.786 16:44:51 -- scripts/common.sh@343 -- # case "$op" in 00:25:14.786 16:44:51 -- scripts/common.sh@344 -- # : 1 00:25:14.786 16:44:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:14.786 16:44:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.787 16:44:51 -- scripts/common.sh@364 -- # decimal 1 00:25:14.787 16:44:51 -- scripts/common.sh@352 -- # local d=1 00:25:14.787 16:44:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.787 16:44:51 -- scripts/common.sh@354 -- # echo 1 00:25:14.787 16:44:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:14.787 16:44:51 -- scripts/common.sh@365 -- # decimal 2 00:25:14.787 16:44:51 -- scripts/common.sh@352 -- # local d=2 00:25:14.787 16:44:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.787 16:44:51 -- scripts/common.sh@354 -- # echo 2 00:25:14.787 16:44:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:14.787 16:44:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:14.787 16:44:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:14.787 16:44:51 -- scripts/common.sh@367 -- # return 0 00:25:14.787 16:44:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.787 16:44:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:14.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.787 --rc genhtml_branch_coverage=1 00:25:14.787 --rc genhtml_function_coverage=1 00:25:14.787 --rc genhtml_legend=1 00:25:14.787 --rc geninfo_all_blocks=1 00:25:14.787 --rc geninfo_unexecuted_blocks=1 00:25:14.787 00:25:14.787 ' 00:25:14.787 16:44:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:14.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.787 --rc genhtml_branch_coverage=1 00:25:14.787 --rc genhtml_function_coverage=1 00:25:14.787 --rc genhtml_legend=1 00:25:14.787 --rc geninfo_all_blocks=1 00:25:14.787 --rc geninfo_unexecuted_blocks=1 00:25:14.787 00:25:14.787 ' 00:25:14.787 16:44:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:14.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.787 --rc genhtml_branch_coverage=1 00:25:14.787 --rc genhtml_function_coverage=1 00:25:14.787 --rc genhtml_legend=1 00:25:14.787 --rc geninfo_all_blocks=1 00:25:14.787 --rc geninfo_unexecuted_blocks=1 00:25:14.787 00:25:14.787 ' 00:25:14.787 16:44:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:14.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.787 --rc genhtml_branch_coverage=1 00:25:14.787 --rc genhtml_function_coverage=1 00:25:14.787 --rc genhtml_legend=1 00:25:14.787 --rc geninfo_all_blocks=1 00:25:14.787 --rc geninfo_unexecuted_blocks=1 00:25:14.787 00:25:14.787 ' 00:25:14.787 16:44:51 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:14.787 16:44:51 -- nvmf/common.sh@7 -- # uname -s 00:25:14.787 16:44:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.787 16:44:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.787 16:44:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.787 16:44:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.787 16:44:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.787 16:44:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.787 16:44:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.787 16:44:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.787 16:44:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.787 16:44:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.787 16:44:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 
00:25:14.787 16:44:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:25:14.787 16:44:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.787 16:44:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.787 16:44:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:14.787 16:44:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:14.787 16:44:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.787 16:44:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.787 16:44:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.787 16:44:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.787 16:44:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.787 16:44:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.787 16:44:51 -- paths/export.sh@5 -- # export PATH 00:25:14.787 16:44:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.787 16:44:51 -- nvmf/common.sh@46 -- # : 0 00:25:14.787 16:44:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:14.787 16:44:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:14.787 16:44:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:14.787 16:44:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.787 16:44:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.787 16:44:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:14.787 16:44:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:14.787 16:44:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:14.787 16:44:51 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:14.787 16:44:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.787 16:44:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.787 16:44:51 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.787 16:44:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.787 16:44:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.787 16:44:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.787 16:44:51 -- paths/export.sh@5 -- # export PATH 00:25:14.787 16:44:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.787 16:44:51 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:14.787 16:44:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:14.787 16:44:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.787 16:44:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:14.787 16:44:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:14.787 16:44:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:14.787 16:44:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.787 16:44:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:14.787 16:44:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.787 16:44:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:14.787 16:44:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:14.787 16:44:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:14.787 16:44:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:14.787 16:44:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:14.787 16:44:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:14.787 16:44:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.787 16:44:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.787 16:44:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:14.787 16:44:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:14.787 16:44:51 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:14.787 16:44:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:14.787 16:44:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:14.787 16:44:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.787 16:44:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:14.787 16:44:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:14.787 16:44:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:14.787 16:44:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:14.787 16:44:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:14.787 16:44:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:14.787 Cannot find device "nvmf_tgt_br" 00:25:14.787 16:44:51 -- nvmf/common.sh@154 -- # true 00:25:14.787 16:44:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:14.787 Cannot find device "nvmf_tgt_br2" 00:25:14.787 16:44:51 -- nvmf/common.sh@155 -- # true 00:25:14.787 16:44:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:14.787 16:44:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:14.787 Cannot find device "nvmf_tgt_br" 00:25:14.787 16:44:51 -- nvmf/common.sh@157 -- # true 00:25:14.787 16:44:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:14.787 Cannot find device "nvmf_tgt_br2" 00:25:14.787 16:44:51 -- nvmf/common.sh@158 -- # true 00:25:14.788 16:44:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:14.788 16:44:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:14.788 16:44:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:14.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.788 16:44:51 -- nvmf/common.sh@161 -- # true 00:25:14.788 16:44:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:14.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.788 16:44:51 -- nvmf/common.sh@162 -- # true 00:25:14.788 16:44:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:14.788 16:44:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:14.788 16:44:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:14.788 16:44:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:14.788 16:44:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:14.788 16:44:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:14.788 16:44:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:14.788 16:44:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:14.788 16:44:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:14.788 16:44:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:14.788 16:44:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:14.788 16:44:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:14.788 16:44:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:14.788 16:44:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:14.788 16:44:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:14.788 16:44:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:14.788 16:44:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:14.788 16:44:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:14.788 16:44:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:14.788 16:44:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:14.788 16:44:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:14.788 16:44:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:14.788 16:44:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:14.788 16:44:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:14.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:25:14.788 00:25:14.788 --- 10.0.0.2 ping statistics --- 00:25:14.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.788 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:14.788 16:44:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:14.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:14.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:14.788 00:25:14.788 --- 10.0.0.3 ping statistics --- 00:25:14.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.788 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:14.788 16:44:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:14.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:25:14.788 00:25:14.788 --- 10.0.0.1 ping statistics --- 00:25:14.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.788 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:25:14.788 16:44:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.788 16:44:52 -- nvmf/common.sh@421 -- # return 0 00:25:14.788 16:44:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:14.788 16:44:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.788 16:44:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:14.788 16:44:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:14.788 16:44:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.788 16:44:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:14.788 16:44:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:14.788 16:44:52 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:14.788 16:44:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.788 16:44:52 -- common/autotest_common.sh@10 -- # set +x 00:25:14.788 16:44:52 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:14.788 16:44:52 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:14.788 16:44:52 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:14.788 16:44:52 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:14.788 16:44:52 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:14.788 16:44:52 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:14.788 16:44:52 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:14.788 16:44:52 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:14.788 16:44:52 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:14.788 16:44:52 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:14.788 16:44:52 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:14.788 16:44:52 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:14.788 16:44:52 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:14.788 16:44:52 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:14.788 16:44:52 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:14.788 16:44:52 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:14.788 16:44:52 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:14.788 16:44:52 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:15.047 16:44:52 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:15.047 16:44:52 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:15.047 16:44:52 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:15.047 16:44:52 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:15.306 16:44:52 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:15.306 16:44:52 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:15.306 16:44:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:15.306 16:44:52 -- common/autotest_common.sh@10 -- # set +x 00:25:15.306 16:44:52 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:15.306 16:44:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:15.306 16:44:52 -- common/autotest_common.sh@10 -- # set +x 00:25:15.306 16:44:52 -- target/identify_passthru.sh@31 -- # nvmfpid=101922 00:25:15.306 16:44:52 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:15.306 16:44:52 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.306 16:44:52 -- target/identify_passthru.sh@35 -- # waitforlisten 101922 00:25:15.306 16:44:52 -- common/autotest_common.sh@829 -- # '[' -z 101922 ']' 00:25:15.306 16:44:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.306 16:44:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.306 16:44:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.306 16:44:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.306 16:44:52 -- common/autotest_common.sh@10 -- # set +x 00:25:15.306 [2024-11-16 16:44:52.707781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:15.306 [2024-11-16 16:44:52.707854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.564 [2024-11-16 16:44:52.844498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:15.564 [2024-11-16 16:44:52.935849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:15.564 [2024-11-16 16:44:52.936044] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.564 [2024-11-16 16:44:52.936081] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.564 [2024-11-16 16:44:52.936095] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
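Here the harness forks nvmf_tgt with --wait-for-rpc and then blocks in waitforlisten until pid 101922 is up and owns /var/tmp/spdk.sock. A hedged sketch of that polling pattern — an illustrative stand-in for the real autotest helper, which additionally confirms readiness over the RPC channel rather than trusting socket existence alone:

    # Poll until the just-forked target both survives startup and has created
    # its RPC Unix domain socket.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [[ -S $sock ]] && return 0               # RPC server has bound its socket
            sleep 0.1
        done
        return 1                                      # timed out
    }

    wait_for_rpc_sock "$nvmfpid" || { echo "nvmf_tgt failed to start" >&2; exit 1; }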
00:25:15.564 [2024-11-16 16:44:52.936194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.564 [2024-11-16 16:44:52.936803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.564 [2024-11-16 16:44:52.936987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.564 [2024-11-16 16:44:52.936999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.501 16:44:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.502 16:44:53 -- common/autotest_common.sh@862 -- # return 0 00:25:16.502 16:44:53 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:16.502 16:44:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.502 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.502 16:44:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.502 16:44:53 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:16.502 16:44:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.502 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.502 [2024-11-16 16:44:53.856707] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:16.502 16:44:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.502 16:44:53 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.502 16:44:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.502 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.502 [2024-11-16 16:44:53.871418] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.502 16:44:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.502 16:44:53 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:16.502 16:44:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:16.502 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.502 16:44:53 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:16.502 16:44:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.502 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.502 Nvme0n1 00:25:16.502 16:44:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.502 16:44:53 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:16.502 16:44:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.502 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.761 16:44:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.761 16:44:53 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:16.761 16:44:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.761 16:44:53 -- common/autotest_common.sh@10 -- # set +x 00:25:16.761 16:44:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.761 16:44:54 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.761 16:44:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.761 16:44:54 -- common/autotest_common.sh@10 -- # set +x 00:25:16.761 [2024-11-16 16:44:54.009888] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.761 16:44:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:16.761 16:44:54 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:16.761 16:44:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.761 16:44:54 -- common/autotest_common.sh@10 -- # set +x 00:25:16.761 [2024-11-16 16:44:54.017603] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:16.761 [ 00:25:16.761 { 00:25:16.761 "allow_any_host": true, 00:25:16.761 "hosts": [], 00:25:16.761 "listen_addresses": [], 00:25:16.761 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:16.761 "subtype": "Discovery" 00:25:16.761 }, 00:25:16.761 { 00:25:16.761 "allow_any_host": true, 00:25:16.761 "hosts": [], 00:25:16.761 "listen_addresses": [ 00:25:16.761 { 00:25:16.761 "adrfam": "IPv4", 00:25:16.761 "traddr": "10.0.0.2", 00:25:16.761 "transport": "TCP", 00:25:16.761 "trsvcid": "4420", 00:25:16.761 "trtype": "TCP" 00:25:16.761 } 00:25:16.761 ], 00:25:16.761 "max_cntlid": 65519, 00:25:16.761 "max_namespaces": 1, 00:25:16.761 "min_cntlid": 1, 00:25:16.761 "model_number": "SPDK bdev Controller", 00:25:16.761 "namespaces": [ 00:25:16.761 { 00:25:16.761 "bdev_name": "Nvme0n1", 00:25:16.761 "name": "Nvme0n1", 00:25:16.761 "nguid": "E0BC3136029F4D99B58C2373CFC29EB0", 00:25:16.761 "nsid": 1, 00:25:16.761 "uuid": "e0bc3136-029f-4d99-b58c-2373cfc29eb0" 00:25:16.761 } 00:25:16.761 ], 00:25:16.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.761 "serial_number": "SPDK00000000000001", 00:25:16.761 "subtype": "NVMe" 00:25:16.761 } 00:25:16.761 ] 00:25:16.761 16:44:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.761 16:44:54 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:16.761 16:44:54 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:16.761 16:44:54 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:16.761 16:44:54 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:16.762 16:44:54 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:16.762 16:44:54 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:16.762 16:44:54 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:17.020 16:44:54 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:17.020 16:44:54 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:17.020 16:44:54 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:17.020 16:44:54 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.020 16:44:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.020 16:44:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.020 16:44:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.020 16:44:54 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:17.020 16:44:54 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:17.020 16:44:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:17.020 16:44:54 -- nvmf/common.sh@116 -- # sync 00:25:17.279 16:44:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:17.279 16:44:54 -- nvmf/common.sh@119 -- # set +e 00:25:17.279 16:44:54 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:17.279 16:44:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:17.279 rmmod nvme_tcp 00:25:17.279 rmmod nvme_fabrics 00:25:17.279 rmmod nvme_keyring 00:25:17.279 16:44:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:17.279 16:44:54 -- nvmf/common.sh@123 -- # set -e 00:25:17.279 16:44:54 -- nvmf/common.sh@124 -- # return 0 00:25:17.279 16:44:54 -- nvmf/common.sh@477 -- # '[' -n 101922 ']' 00:25:17.279 16:44:54 -- nvmf/common.sh@478 -- # killprocess 101922 00:25:17.279 16:44:54 -- common/autotest_common.sh@936 -- # '[' -z 101922 ']' 00:25:17.279 16:44:54 -- common/autotest_common.sh@940 -- # kill -0 101922 00:25:17.279 16:44:54 -- common/autotest_common.sh@941 -- # uname 00:25:17.279 16:44:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:17.279 16:44:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101922 00:25:17.279 killing process with pid 101922 00:25:17.279 16:44:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:17.279 16:44:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:17.279 16:44:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101922' 00:25:17.279 16:44:54 -- common/autotest_common.sh@955 -- # kill 101922 00:25:17.279 [2024-11-16 16:44:54.612809] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:17.279 16:44:54 -- common/autotest_common.sh@960 -- # wait 101922 00:25:17.538 16:44:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:17.538 16:44:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:17.538 16:44:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:17.538 16:44:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.538 16:44:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:17.538 16:44:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.538 16:44:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:17.538 16:44:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.538 16:44:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:17.538 00:25:17.538 real 0m3.282s 00:25:17.538 user 0m8.117s 00:25:17.538 sys 0m0.898s 00:25:17.538 16:44:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:17.538 16:44:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.538 ************************************ 00:25:17.538 END TEST nvmf_identify_passthru 00:25:17.538 ************************************ 00:25:17.538 16:44:54 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:17.538 16:44:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:17.538 16:44:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:17.538 16:44:54 -- common/autotest_common.sh@10 -- # set +x 00:25:17.538 ************************************ 00:25:17.538 START TEST nvmf_dif 00:25:17.538 ************************************ 00:25:17.538 16:44:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:17.538 * Looking for test storage... 
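Before the dif suite stands its own target up, note the teardown pattern the passthru test just used: killprocess verifies the pid still looks like an SPDK reactor, sends SIGTERM, and reaps the exit status. A sketch of that stop-and-reap idiom, assuming $pid is a child of the calling shell (so wait can collect it); stop_target is an illustrative name:

    stop_target() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                 # already gone
        # Only SIGTERM processes that look like SPDK reactors, as the log's
        # `ps --no-headers -o comm=` check does.
        [[ $(ps --no-headers -o comm= -p "$pid") == reactor_* ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                        # reap the exit status
    }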
00:25:17.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:17.538 16:44:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:17.538 16:44:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:17.538 16:44:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:17.797 16:44:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:17.797 16:44:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:17.797 16:44:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:17.797 16:44:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:17.797 16:44:55 -- scripts/common.sh@335 -- # IFS=.-: 00:25:17.797 16:44:55 -- scripts/common.sh@335 -- # read -ra ver1 00:25:17.797 16:44:55 -- scripts/common.sh@336 -- # IFS=.-: 00:25:17.797 16:44:55 -- scripts/common.sh@336 -- # read -ra ver2 00:25:17.797 16:44:55 -- scripts/common.sh@337 -- # local 'op=<' 00:25:17.797 16:44:55 -- scripts/common.sh@339 -- # ver1_l=2 00:25:17.797 16:44:55 -- scripts/common.sh@340 -- # ver2_l=1 00:25:17.797 16:44:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:17.797 16:44:55 -- scripts/common.sh@343 -- # case "$op" in 00:25:17.797 16:44:55 -- scripts/common.sh@344 -- # : 1 00:25:17.797 16:44:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:17.797 16:44:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:17.797 16:44:55 -- scripts/common.sh@364 -- # decimal 1 00:25:17.797 16:44:55 -- scripts/common.sh@352 -- # local d=1 00:25:17.797 16:44:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:17.797 16:44:55 -- scripts/common.sh@354 -- # echo 1 00:25:17.797 16:44:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:17.797 16:44:55 -- scripts/common.sh@365 -- # decimal 2 00:25:17.797 16:44:55 -- scripts/common.sh@352 -- # local d=2 00:25:17.797 16:44:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:17.797 16:44:55 -- scripts/common.sh@354 -- # echo 2 00:25:17.797 16:44:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:17.797 16:44:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:17.797 16:44:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:17.797 16:44:55 -- scripts/common.sh@367 -- # return 0 00:25:17.797 16:44:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:17.797 16:44:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:17.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.797 --rc genhtml_branch_coverage=1 00:25:17.797 --rc genhtml_function_coverage=1 00:25:17.797 --rc genhtml_legend=1 00:25:17.797 --rc geninfo_all_blocks=1 00:25:17.797 --rc geninfo_unexecuted_blocks=1 00:25:17.797 00:25:17.797 ' 00:25:17.797 16:44:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:17.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.797 --rc genhtml_branch_coverage=1 00:25:17.797 --rc genhtml_function_coverage=1 00:25:17.797 --rc genhtml_legend=1 00:25:17.797 --rc geninfo_all_blocks=1 00:25:17.797 --rc geninfo_unexecuted_blocks=1 00:25:17.797 00:25:17.797 ' 00:25:17.797 16:44:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:17.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.797 --rc genhtml_branch_coverage=1 00:25:17.797 --rc genhtml_function_coverage=1 00:25:17.797 --rc genhtml_legend=1 00:25:17.797 --rc geninfo_all_blocks=1 00:25:17.797 --rc geninfo_unexecuted_blocks=1 00:25:17.797 00:25:17.797 ' 00:25:17.797 
16:44:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:17.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.797 --rc genhtml_branch_coverage=1 00:25:17.797 --rc genhtml_function_coverage=1 00:25:17.797 --rc genhtml_legend=1 00:25:17.797 --rc geninfo_all_blocks=1 00:25:17.797 --rc geninfo_unexecuted_blocks=1 00:25:17.797 00:25:17.797 ' 00:25:17.797 16:44:55 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:17.797 16:44:55 -- nvmf/common.sh@7 -- # uname -s 00:25:17.797 16:44:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.797 16:44:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.797 16:44:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.797 16:44:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.797 16:44:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.797 16:44:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.797 16:44:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.797 16:44:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.797 16:44:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.797 16:44:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.797 16:44:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:25:17.797 16:44:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007 00:25:17.797 16:44:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.797 16:44:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.797 16:44:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:17.797 16:44:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:17.797 16:44:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.797 16:44:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.797 16:44:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.797 16:44:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.798 16:44:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.798 16:44:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.798 16:44:55 -- paths/export.sh@5 -- # export PATH 00:25:17.798 16:44:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.798 16:44:55 -- nvmf/common.sh@46 -- # : 0 00:25:17.798 16:44:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:17.798 16:44:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:17.798 16:44:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:17.798 16:44:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.798 16:44:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.798 16:44:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:17.798 16:44:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:17.798 16:44:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:17.798 16:44:55 -- target/dif.sh@15 -- # NULL_META=16 00:25:17.798 16:44:55 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:17.798 16:44:55 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:17.798 16:44:55 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:17.798 16:44:55 -- target/dif.sh@135 -- # nvmftestinit 00:25:17.798 16:44:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:17.798 16:44:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.798 16:44:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:17.798 16:44:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:17.798 16:44:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:17.798 16:44:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.798 16:44:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:17.798 16:44:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.798 16:44:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:17.798 16:44:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:17.798 16:44:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:17.798 16:44:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:17.798 16:44:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:17.798 16:44:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:17.798 16:44:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.798 16:44:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.798 16:44:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:17.798 16:44:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:17.798 16:44:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:17.798 16:44:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:17.798 16:44:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:17.798 16:44:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.798 16:44:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:17.798 16:44:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:17.798 16:44:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:17.798 16:44:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:17.798 16:44:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:17.798 16:44:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:17.798 Cannot find device "nvmf_tgt_br" 
00:25:17.798 16:44:55 -- nvmf/common.sh@154 -- # true 00:25:17.798 16:44:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:17.798 Cannot find device "nvmf_tgt_br2" 00:25:17.798 16:44:55 -- nvmf/common.sh@155 -- # true 00:25:17.798 16:44:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:17.798 16:44:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:17.798 Cannot find device "nvmf_tgt_br" 00:25:17.798 16:44:55 -- nvmf/common.sh@157 -- # true 00:25:17.798 16:44:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:17.798 Cannot find device "nvmf_tgt_br2" 00:25:17.798 16:44:55 -- nvmf/common.sh@158 -- # true 00:25:17.798 16:44:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:17.798 16:44:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:17.798 16:44:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:17.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:17.798 16:44:55 -- nvmf/common.sh@161 -- # true 00:25:17.798 16:44:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:17.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:17.798 16:44:55 -- nvmf/common.sh@162 -- # true 00:25:17.798 16:44:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:17.798 16:44:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:17.798 16:44:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:18.057 16:44:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:18.057 16:44:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:18.057 16:44:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:18.057 16:44:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:18.057 16:44:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:18.057 16:44:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:18.057 16:44:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:18.057 16:44:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:18.057 16:44:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:18.057 16:44:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:18.057 16:44:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:18.057 16:44:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:18.057 16:44:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:18.057 16:44:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:18.057 16:44:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:18.057 16:44:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:18.057 16:44:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:18.057 16:44:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:18.057 16:44:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:18.057 16:44:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:18.057 16:44:55 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:18.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:25:18.057 00:25:18.057 --- 10.0.0.2 ping statistics --- 00:25:18.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.057 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:18.057 16:44:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:18.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:18.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:25:18.057 00:25:18.057 --- 10.0.0.3 ping statistics --- 00:25:18.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.057 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:18.057 16:44:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:18.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:18.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:18.057 00:25:18.057 --- 10.0.0.1 ping statistics --- 00:25:18.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.057 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:18.057 16:44:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.057 16:44:55 -- nvmf/common.sh@421 -- # return 0 00:25:18.057 16:44:55 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:18.057 16:44:55 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:18.316 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:18.575 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:18.575 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:18.575 16:44:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.575 16:44:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:18.575 16:44:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:18.575 16:44:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.575 16:44:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:18.575 16:44:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:18.575 16:44:55 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:18.575 16:44:55 -- target/dif.sh@137 -- # nvmfappstart 00:25:18.575 16:44:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:18.575 16:44:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.575 16:44:55 -- common/autotest_common.sh@10 -- # set +x 00:25:18.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.575 16:44:55 -- nvmf/common.sh@469 -- # nvmfpid=102285 00:25:18.575 16:44:55 -- nvmf/common.sh@470 -- # waitforlisten 102285 00:25:18.575 16:44:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:18.575 16:44:55 -- common/autotest_common.sh@829 -- # '[' -z 102285 ']' 00:25:18.575 16:44:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.575 16:44:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.575 16:44:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
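The nvmf_veth_init sequence above rebuilds the virtual topology every TCP test runs on: the initiator side (10.0.0.1) stays in the root namespace, the target interfaces (10.0.0.2, 10.0.0.3) live inside nvmf_tgt_ns_spdk, and a bridge joins the root-namespace peer ends. A condensed sketch of one leg of that wiring, taken from the commands in the trace (needs root; interface names mirror the log):

    ns=nvmf_tgt_ns_spdk
    ip netns add "$ns"

    # veth pairs: the *_if end carries traffic, the *_br end gets bridged.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$ns"        # target end moves into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec "$ns" ip link set nvmf_tgt_if up

    # Bridge the peer ends so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  up && ip link set nvmf_tgt_br  master nvmf_br

    # Admit NVMe/TCP traffic (port 4420) on the initiator interface.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2    # same sanity check the log performs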
00:25:18.575 16:44:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.575 16:44:55 -- common/autotest_common.sh@10 -- # set +x 00:25:18.575 [2024-11-16 16:44:55.957259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:18.575 [2024-11-16 16:44:55.957670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.834 [2024-11-16 16:44:56.096309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.834 [2024-11-16 16:44:56.182339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:18.834 [2024-11-16 16:44:56.182902] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.834 [2024-11-16 16:44:56.183092] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.834 [2024-11-16 16:44:56.183342] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.834 [2024-11-16 16:44:56.183613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.770 16:44:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.770 16:44:56 -- common/autotest_common.sh@862 -- # return 0 00:25:19.770 16:44:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:19.770 16:44:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.770 16:44:56 -- common/autotest_common.sh@10 -- # set +x 00:25:19.770 16:44:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.770 16:44:57 -- target/dif.sh@139 -- # create_transport 00:25:19.770 16:44:57 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:19.770 16:44:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.770 16:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.770 [2024-11-16 16:44:57.043019] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.770 16:44:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.770 16:44:57 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:19.770 16:44:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:19.770 16:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:19.771 16:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.771 ************************************ 00:25:19.771 START TEST fio_dif_1_default 00:25:19.771 ************************************ 00:25:19.771 16:44:57 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:19.771 16:44:57 -- target/dif.sh@86 -- # create_subsystems 0 00:25:19.771 16:44:57 -- target/dif.sh@28 -- # local sub 00:25:19.771 16:44:57 -- target/dif.sh@30 -- # for sub in "$@" 00:25:19.771 16:44:57 -- target/dif.sh@31 -- # create_subsystem 0 00:25:19.771 16:44:57 -- target/dif.sh@18 -- # local sub_id=0 00:25:19.771 16:44:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:19.771 16:44:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.771 16:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.771 bdev_null0 00:25:19.771 16:44:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.771 16:44:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:19.771 16:44:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.771 16:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.771 16:44:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.771 16:44:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:19.771 16:44:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.771 16:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.771 16:44:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.771 16:44:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:19.771 16:44:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.771 16:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:19.771 [2024-11-16 16:44:57.091196] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.771 16:44:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.771 16:44:57 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:19.771 16:44:57 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:19.771 16:44:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:19.771 16:44:57 -- nvmf/common.sh@520 -- # config=() 00:25:19.771 16:44:57 -- nvmf/common.sh@520 -- # local subsystem config 00:25:19.771 16:44:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.771 16:44:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:19.771 16:44:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:19.771 { 00:25:19.771 "params": { 00:25:19.771 "name": "Nvme$subsystem", 00:25:19.771 "trtype": "$TEST_TRANSPORT", 00:25:19.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.771 "adrfam": "ipv4", 00:25:19.771 "trsvcid": "$NVMF_PORT", 00:25:19.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.771 "hdgst": ${hdgst:-false}, 00:25:19.771 "ddgst": ${ddgst:-false} 00:25:19.771 }, 00:25:19.771 "method": "bdev_nvme_attach_controller" 00:25:19.771 } 00:25:19.771 EOF 00:25:19.771 )") 00:25:19.771 16:44:57 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.771 16:44:57 -- target/dif.sh@82 -- # gen_fio_conf 00:25:19.771 16:44:57 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:19.771 16:44:57 -- target/dif.sh@54 -- # local file 00:25:19.771 16:44:57 -- target/dif.sh@56 -- # cat 00:25:19.771 16:44:57 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:19.771 16:44:57 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:19.771 16:44:57 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:19.771 16:44:57 -- common/autotest_common.sh@1330 -- # shift 00:25:19.771 16:44:57 -- nvmf/common.sh@542 -- # cat 00:25:19.771 16:44:57 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:19.771 16:44:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:19.771 16:44:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:19.771 16:44:57 -- target/dif.sh@72 -- # (( file <= files )) 00:25:19.771 16:44:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:19.771 
16:44:57 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:19.771 16:44:57 -- nvmf/common.sh@544 -- # jq . 00:25:19.771 16:44:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:19.771 16:44:57 -- nvmf/common.sh@545 -- # IFS=, 00:25:19.771 16:44:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:19.771 "params": { 00:25:19.771 "name": "Nvme0", 00:25:19.771 "trtype": "tcp", 00:25:19.771 "traddr": "10.0.0.2", 00:25:19.771 "adrfam": "ipv4", 00:25:19.771 "trsvcid": "4420", 00:25:19.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:19.771 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:19.771 "hdgst": false, 00:25:19.771 "ddgst": false 00:25:19.771 }, 00:25:19.771 "method": "bdev_nvme_attach_controller" 00:25:19.771 }' 00:25:19.771 16:44:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:19.771 16:44:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:19.771 16:44:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:19.771 16:44:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:19.771 16:44:57 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:19.771 16:44:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:19.771 16:44:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:19.771 16:44:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:19.771 16:44:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:19.771 16:44:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.030 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:20.030 fio-3.35 00:25:20.030 Starting 1 thread 00:25:20.288 [2024-11-16 16:44:57.732175] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
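fio_bdev above drives fio against the exported null bdev through SPDK's bdev ioengine: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem on one fd, the fio job arrives on another, and the plugin is injected via LD_PRELOAD (the "RPC socket in use" errors below are expected, since the target already owns /var/tmp/spdk.sock). A condensed, hedged sketch of the equivalent direct invocation, assuming a built SPDK tree at the path the log uses and writing the JSON to a temp file instead of a fd:

    spdk=/home/vagrant/spdk_repo/spdk

    # Bdev-subsystem config attaching over NVMe/TCP, mirroring the generated JSON.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0"
          }
        }]
      }]
    }
    EOF

    # The fio plugin requires --thread=1; the filename is the bdev name.
    LD_PRELOAD="$spdk/build/fio/spdk_bdev" \
        fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json \
            --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4k \
            --iodepth=4 --runtime=10 --time_based=1 --thread=1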
00:25:20.288 [2024-11-16 16:44:57.732556] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:32.492 00:25:32.492 filename0: (groupid=0, jobs=1): err= 0: pid=102365: Sat Nov 16 16:45:07 2024 00:25:32.492 read: IOPS=2526, BW=9.87MiB/s (10.3MB/s)(98.7MiB/10004msec) 00:25:32.492 slat (nsec): min=5732, max=62447, avg=7019.23, stdev=2481.69 00:25:32.492 clat (usec): min=335, max=41454, avg=1562.47, stdev=6824.35 00:25:32.492 lat (usec): min=341, max=41462, avg=1569.49, stdev=6824.50 00:25:32.492 clat percentiles (usec): 00:25:32.492 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 363], 00:25:32.492 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:25:32.492 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 453], 00:25:32.492 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:25:32.492 | 99.99th=[41681] 00:25:32.492 bw ( KiB/s): min= 2976, max=26368, per=94.72%, avg=9571.89, stdev=5829.89, samples=19 00:25:32.492 iops : min= 744, max= 6592, avg=2392.95, stdev=1457.47, samples=19 00:25:32.492 lat (usec) : 500=96.71%, 750=0.35% 00:25:32.492 lat (msec) : 2=0.02%, 4=0.02%, 50=2.91% 00:25:32.492 cpu : usr=91.13%, sys=7.72%, ctx=29, majf=0, minf=0 00:25:32.492 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:32.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.492 issued rwts: total=25272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.492 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:32.492 00:25:32.492 Run status group 0 (all jobs): 00:25:32.492 READ: bw=9.87MiB/s (10.3MB/s), 9.87MiB/s-9.87MiB/s (10.3MB/s-10.3MB/s), io=98.7MiB (104MB), run=10004-10004msec 00:25:32.492 16:45:08 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:32.492 16:45:08 -- target/dif.sh@43 -- # local sub 00:25:32.492 16:45:08 -- target/dif.sh@45 -- # for sub in "$@" 00:25:32.492 16:45:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:32.492 16:45:08 -- target/dif.sh@36 -- # local sub_id=0 00:25:32.492 16:45:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 16:45:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 ************************************ 00:25:32.492 END TEST fio_dif_1_default 00:25:32.492 ************************************ 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 00:25:32.492 real 0m11.018s 00:25:32.492 user 0m9.730s 00:25:32.492 sys 0m1.086s 00:25:32.492 16:45:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 16:45:08 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:32.492 16:45:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:32.492 16:45:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 ************************************ 00:25:32.492 START 
TEST fio_dif_1_multi_subsystems 00:25:32.492 ************************************ 00:25:32.492 16:45:08 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:32.492 16:45:08 -- target/dif.sh@92 -- # local files=1 00:25:32.492 16:45:08 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:32.492 16:45:08 -- target/dif.sh@28 -- # local sub 00:25:32.492 16:45:08 -- target/dif.sh@30 -- # for sub in "$@" 00:25:32.492 16:45:08 -- target/dif.sh@31 -- # create_subsystem 0 00:25:32.492 16:45:08 -- target/dif.sh@18 -- # local sub_id=0 00:25:32.492 16:45:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 bdev_null0 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 16:45:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 16:45:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 16:45:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 [2024-11-16 16:45:08.158401] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 16:45:08 -- target/dif.sh@30 -- # for sub in "$@" 00:25:32.492 16:45:08 -- target/dif.sh@31 -- # create_subsystem 1 00:25:32.492 16:45:08 -- target/dif.sh@18 -- # local sub_id=1 00:25:32.492 16:45:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 bdev_null1 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 16:45:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 16:45:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- common/autotest_common.sh@10 -- # set +x 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 16:45:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.492 16:45:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.492 16:45:08 -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.492 16:45:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.492 16:45:08 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:32.492 16:45:08 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:32.492 16:45:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:32.492 16:45:08 -- nvmf/common.sh@520 -- # config=() 00:25:32.492 16:45:08 -- nvmf/common.sh@520 -- # local subsystem config 00:25:32.492 16:45:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.492 16:45:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.492 { 00:25:32.492 "params": { 00:25:32.492 "name": "Nvme$subsystem", 00:25:32.492 "trtype": "$TEST_TRANSPORT", 00:25:32.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.492 "adrfam": "ipv4", 00:25:32.492 "trsvcid": "$NVMF_PORT", 00:25:32.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.492 "hdgst": ${hdgst:-false}, 00:25:32.492 "ddgst": ${ddgst:-false} 00:25:32.492 }, 00:25:32.492 "method": "bdev_nvme_attach_controller" 00:25:32.492 } 00:25:32.492 EOF 00:25:32.492 )") 00:25:32.492 16:45:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:32.492 16:45:08 -- target/dif.sh@82 -- # gen_fio_conf 00:25:32.492 16:45:08 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:32.492 16:45:08 -- target/dif.sh@54 -- # local file 00:25:32.492 16:45:08 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:32.492 16:45:08 -- target/dif.sh@56 -- # cat 00:25:32.492 16:45:08 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:32.492 16:45:08 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:32.492 16:45:08 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:32.492 16:45:08 -- nvmf/common.sh@542 -- # cat 00:25:32.492 16:45:08 -- common/autotest_common.sh@1330 -- # shift 00:25:32.492 16:45:08 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:32.492 16:45:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:32.492 16:45:08 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:32.492 16:45:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:32.492 16:45:08 -- target/dif.sh@72 -- # (( file <= files )) 00:25:32.492 16:45:08 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:32.492 16:45:08 -- target/dif.sh@73 -- # cat 00:25:32.492 16:45:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:32.492 16:45:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:32.492 16:45:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:32.492 { 00:25:32.492 "params": { 00:25:32.492 "name": "Nvme$subsystem", 00:25:32.492 "trtype": "$TEST_TRANSPORT", 00:25:32.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.492 "adrfam": "ipv4", 00:25:32.492 "trsvcid": "$NVMF_PORT", 00:25:32.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.492 "hdgst": ${hdgst:-false}, 00:25:32.492 "ddgst": ${ddgst:-false} 00:25:32.492 }, 00:25:32.492 "method": "bdev_nvme_attach_controller" 00:25:32.492 } 00:25:32.492 EOF 00:25:32.492 )") 00:25:32.493 16:45:08 -- nvmf/common.sh@542 -- # cat 00:25:32.493 16:45:08 -- target/dif.sh@72 
-- # (( file++ )) 00:25:32.493 16:45:08 -- target/dif.sh@72 -- # (( file <= files )) 00:25:32.493 16:45:08 -- nvmf/common.sh@544 -- # jq . 00:25:32.493 16:45:08 -- nvmf/common.sh@545 -- # IFS=, 00:25:32.493 16:45:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:32.493 "params": { 00:25:32.493 "name": "Nvme0", 00:25:32.493 "trtype": "tcp", 00:25:32.493 "traddr": "10.0.0.2", 00:25:32.493 "adrfam": "ipv4", 00:25:32.493 "trsvcid": "4420", 00:25:32.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:32.493 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:32.493 "hdgst": false, 00:25:32.493 "ddgst": false 00:25:32.493 }, 00:25:32.493 "method": "bdev_nvme_attach_controller" 00:25:32.493 },{ 00:25:32.493 "params": { 00:25:32.493 "name": "Nvme1", 00:25:32.493 "trtype": "tcp", 00:25:32.493 "traddr": "10.0.0.2", 00:25:32.493 "adrfam": "ipv4", 00:25:32.493 "trsvcid": "4420", 00:25:32.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.493 "hdgst": false, 00:25:32.493 "ddgst": false 00:25:32.493 }, 00:25:32.493 "method": "bdev_nvme_attach_controller" 00:25:32.493 }' 00:25:32.493 16:45:08 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:32.493 16:45:08 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:32.493 16:45:08 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:32.493 16:45:08 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:32.493 16:45:08 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:32.493 16:45:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:32.493 16:45:08 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:32.493 16:45:08 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:32.493 16:45:08 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:32.493 16:45:08 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:32.493 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:32.493 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:32.493 fio-3.35 00:25:32.493 Starting 2 threads 00:25:32.493 [2024-11-16 16:45:08.928783] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:32.493 [2024-11-16 16:45:08.928854] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:42.514 00:25:42.514 filename0: (groupid=0, jobs=1): err= 0: pid=102528: Sat Nov 16 16:45:19 2024 00:25:42.514 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.7MiB/10036msec) 00:25:42.514 slat (usec): min=5, max=192, avg= 7.93, stdev= 5.53 00:25:42.514 clat (usec): min=354, max=41513, avg=8359.60, stdev=16096.29 00:25:42.514 lat (usec): min=360, max=41523, avg=8367.53, stdev=16096.32 00:25:42.514 clat percentiles (usec): 00:25:42.514 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 367], 20.00th=[ 375], 00:25:42.514 | 30.00th=[ 383], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 408], 00:25:42.514 | 70.00th=[ 424], 80.00th=[ 742], 90.00th=[41157], 95.00th=[41157], 00:25:42.514 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:42.514 | 99.99th=[41681] 00:25:42.514 bw ( KiB/s): min= 1056, max= 2912, per=52.31%, avg=1913.40, stdev=491.56, samples=20 00:25:42.514 iops : min= 264, max= 728, avg=478.35, stdev=122.89, samples=20 00:25:42.514 lat (usec) : 500=76.57%, 750=3.45%, 1000=0.19% 00:25:42.514 lat (msec) : 2=0.17%, 50=19.63% 00:25:42.514 cpu : usr=94.98%, sys=4.21%, ctx=122, majf=0, minf=9 00:25:42.514 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:42.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.514 issued rwts: total=4788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.514 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:42.514 filename1: (groupid=0, jobs=1): err= 0: pid=102529: Sat Nov 16 16:45:19 2024 00:25:42.514 read: IOPS=437, BW=1750KiB/s (1792kB/s)(17.1MiB/10028msec) 00:25:42.514 slat (nsec): min=5792, max=39063, avg=7830.18, stdev=3643.14 00:25:42.514 clat (usec): min=351, max=42465, avg=9117.30, stdev=16646.87 00:25:42.514 lat (usec): min=357, max=42473, avg=9125.13, stdev=16647.05 00:25:42.514 clat percentiles (usec): 00:25:42.514 | 1.00th=[ 359], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 375], 00:25:42.514 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 404], 00:25:42.514 | 70.00th=[ 433], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:25:42.514 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:25:42.514 | 99.99th=[42206] 00:25:42.514 bw ( KiB/s): min= 992, max= 2720, per=47.93%, avg=1753.40, stdev=376.02, samples=20 00:25:42.514 iops : min= 248, max= 680, avg=438.35, stdev=94.00, samples=20 00:25:42.514 lat (usec) : 500=74.93%, 750=3.21%, 1000=0.16% 00:25:42.514 lat (msec) : 2=0.18%, 50=21.51% 00:25:42.514 cpu : usr=95.70%, sys=3.82%, ctx=12, majf=0, minf=0 00:25:42.514 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:42.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.514 issued rwts: total=4388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.514 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:42.514 00:25:42.514 Run status group 0 (all jobs): 00:25:42.514 READ: bw=3657KiB/s (3745kB/s), 1750KiB/s-1908KiB/s (1792kB/s-1954kB/s), io=35.8MiB (37.6MB), run=10028-10036msec 00:25:42.514 16:45:19 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:42.514 16:45:19 -- target/dif.sh@43 -- # local sub 00:25:42.514 16:45:19 -- target/dif.sh@45 -- # for sub in "$@" 
00:25:42.514 16:45:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:42.514 16:45:19 -- target/dif.sh@36 -- # local sub_id=0 00:25:42.514 16:45:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:42.514 16:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 16:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.514 16:45:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:42.514 16:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 16:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.514 16:45:19 -- target/dif.sh@45 -- # for sub in "$@" 00:25:42.514 16:45:19 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:42.514 16:45:19 -- target/dif.sh@36 -- # local sub_id=1 00:25:42.514 16:45:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.514 16:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 16:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.514 16:45:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:42.514 16:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 ************************************ 00:25:42.514 END TEST fio_dif_1_multi_subsystems 00:25:42.514 ************************************ 00:25:42.514 16:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.514 00:25:42.514 real 0m11.194s 00:25:42.514 user 0m19.908s 00:25:42.514 sys 0m1.089s 00:25:42.514 16:45:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 16:45:19 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:42.514 16:45:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:42.514 16:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 ************************************ 00:25:42.514 START TEST fio_dif_rand_params 00:25:42.514 ************************************ 00:25:42.514 16:45:19 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:25:42.514 16:45:19 -- target/dif.sh@100 -- # local NULL_DIF 00:25:42.514 16:45:19 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:42.514 16:45:19 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:42.514 16:45:19 -- target/dif.sh@103 -- # bs=128k 00:25:42.514 16:45:19 -- target/dif.sh@103 -- # numjobs=3 00:25:42.514 16:45:19 -- target/dif.sh@103 -- # iodepth=3 00:25:42.514 16:45:19 -- target/dif.sh@103 -- # runtime=5 00:25:42.514 16:45:19 -- target/dif.sh@105 -- # create_subsystems 0 00:25:42.514 16:45:19 -- target/dif.sh@28 -- # local sub 00:25:42.514 16:45:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:42.514 16:45:19 -- target/dif.sh@31 -- # create_subsystem 0 00:25:42.514 16:45:19 -- target/dif.sh@18 -- # local sub_id=0 00:25:42.514 16:45:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:42.514 16:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 bdev_null0 00:25:42.514 16:45:19 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.514 16:45:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:42.514 16:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 16:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.514 16:45:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:42.514 16:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 16:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.514 16:45:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.514 16:45:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.514 16:45:19 -- common/autotest_common.sh@10 -- # set +x 00:25:42.514 [2024-11-16 16:45:19.413336] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.514 16:45:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.514 16:45:19 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:42.515 16:45:19 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:42.515 16:45:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:42.515 16:45:19 -- nvmf/common.sh@520 -- # config=() 00:25:42.515 16:45:19 -- nvmf/common.sh@520 -- # local subsystem config 00:25:42.515 16:45:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.515 16:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:42.515 16:45:19 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.515 16:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:42.515 { 00:25:42.515 "params": { 00:25:42.515 "name": "Nvme$subsystem", 00:25:42.515 "trtype": "$TEST_TRANSPORT", 00:25:42.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.515 "adrfam": "ipv4", 00:25:42.515 "trsvcid": "$NVMF_PORT", 00:25:42.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.515 "hdgst": ${hdgst:-false}, 00:25:42.515 "ddgst": ${ddgst:-false} 00:25:42.515 }, 00:25:42.515 "method": "bdev_nvme_attach_controller" 00:25:42.515 } 00:25:42.515 EOF 00:25:42.515 )") 00:25:42.515 16:45:19 -- target/dif.sh@82 -- # gen_fio_conf 00:25:42.515 16:45:19 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:42.515 16:45:19 -- target/dif.sh@54 -- # local file 00:25:42.515 16:45:19 -- target/dif.sh@56 -- # cat 00:25:42.515 16:45:19 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:42.515 16:45:19 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:42.515 16:45:19 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.515 16:45:19 -- common/autotest_common.sh@1330 -- # shift 00:25:42.515 16:45:19 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:42.515 16:45:19 -- nvmf/common.sh@542 -- # cat 00:25:42.515 16:45:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.515 16:45:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:42.515 16:45:19 -- target/dif.sh@72 -- # (( file <= files )) 
00:25:42.515 16:45:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.515 16:45:19 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:42.515 16:45:19 -- nvmf/common.sh@544 -- # jq . 00:25:42.515 16:45:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:42.515 16:45:19 -- nvmf/common.sh@545 -- # IFS=, 00:25:42.515 16:45:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:42.515 "params": { 00:25:42.515 "name": "Nvme0", 00:25:42.515 "trtype": "tcp", 00:25:42.515 "traddr": "10.0.0.2", 00:25:42.515 "adrfam": "ipv4", 00:25:42.515 "trsvcid": "4420", 00:25:42.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:42.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:42.515 "hdgst": false, 00:25:42.515 "ddgst": false 00:25:42.515 }, 00:25:42.515 "method": "bdev_nvme_attach_controller" 00:25:42.515 }' 00:25:42.515 16:45:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:42.515 16:45:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:42.515 16:45:19 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:42.515 16:45:19 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:42.515 16:45:19 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:42.515 16:45:19 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:42.515 16:45:19 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:42.515 16:45:19 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:42.515 16:45:19 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:42.515 16:45:19 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:42.515 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:42.515 ... 00:25:42.515 fio-3.35 00:25:42.515 Starting 3 threads 00:25:42.774 [2024-11-16 16:45:20.037269] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:42.774 [2024-11-16 16:45:20.037987] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:48.045 00:25:48.045 filename0: (groupid=0, jobs=1): err= 0: pid=102689: Sat Nov 16 16:45:25 2024 00:25:48.045 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5008msec) 00:25:48.045 slat (nsec): min=5854, max=55937, avg=11998.90, stdev=5346.52 00:25:48.045 clat (usec): min=3523, max=51310, avg=11494.41, stdev=11541.39 00:25:48.045 lat (usec): min=3532, max=51329, avg=11506.41, stdev=11541.36 00:25:48.045 clat percentiles (usec): 00:25:48.045 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6718], 00:25:48.045 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:25:48.045 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[49021], 00:25:48.045 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:25:48.045 | 99.99th=[51119] 00:25:48.045 bw ( KiB/s): min=26368, max=46080, per=29.69%, avg=33331.20, stdev=6724.35, samples=10 00:25:48.045 iops : min= 206, max= 360, avg=260.40, stdev=52.53, samples=10 00:25:48.045 lat (msec) : 4=0.92%, 10=88.81%, 20=1.76%, 50=6.97%, 100=1.53% 00:25:48.045 cpu : usr=94.37%, sys=4.19%, ctx=8, majf=0, minf=0 00:25:48.045 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.045 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.045 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:48.045 filename0: (groupid=0, jobs=1): err= 0: pid=102690: Sat Nov 16 16:45:25 2024 00:25:48.045 read: IOPS=349, BW=43.7MiB/s (45.8MB/s)(219MiB/5007msec) 00:25:48.045 slat (nsec): min=6271, max=89963, avg=11438.84, stdev=6851.80 00:25:48.045 clat (usec): min=3315, max=48766, avg=8559.66, stdev=3639.76 00:25:48.045 lat (usec): min=3322, max=48778, avg=8571.10, stdev=3640.23 00:25:48.045 clat percentiles (usec): 00:25:48.045 | 1.00th=[ 3359], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 5604], 00:25:48.045 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 9372], 00:25:48.045 | 70.00th=[11600], 80.00th=[12125], 90.00th=[12649], 95.00th=[12780], 00:25:48.045 | 99.00th=[13304], 99.50th=[13566], 99.90th=[48497], 99.95th=[49021], 00:25:48.045 | 99.99th=[49021] 00:25:48.045 bw ( KiB/s): min=32958, max=54528, per=39.81%, avg=44691.00, stdev=6666.92, samples=10 00:25:48.045 iops : min= 257, max= 426, avg=349.10, stdev=52.18, samples=10 00:25:48.045 lat (msec) : 4=18.52%, 10=42.88%, 20=38.42%, 50=0.17% 00:25:48.045 cpu : usr=90.91%, sys=6.87%, ctx=882, majf=0, minf=9 00:25:48.045 IO depths : 1=30.9%, 2=69.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.045 issued rwts: total=1749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.045 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:48.045 filename0: (groupid=0, jobs=1): err= 0: pid=102691: Sat Nov 16 16:45:25 2024 00:25:48.045 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(167MiB/5009msec) 00:25:48.045 slat (nsec): min=5919, max=51855, avg=12381.47, stdev=5406.03 00:25:48.045 clat (usec): min=4599, max=52099, avg=11204.03, stdev=9961.27 00:25:48.045 lat (usec): min=4618, max=52110, avg=11216.41, stdev=9961.25 00:25:48.045 clat percentiles 
(usec): 00:25:48.045 | 1.00th=[ 5211], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6521], 00:25:48.045 | 30.00th=[ 6915], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[ 9896], 00:25:48.045 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11338], 95.00th=[47449], 00:25:48.045 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51643], 99.95th=[52167], 00:25:48.045 | 99.99th=[52167] 00:25:48.045 bw ( KiB/s): min=29952, max=41728, per=30.47%, avg=34208.50, stdev=3775.19, samples=10 00:25:48.045 iops : min= 234, max= 326, avg=267.20, stdev=29.49, samples=10 00:25:48.045 lat (msec) : 10=62.81%, 20=30.92%, 50=3.96%, 100=2.32% 00:25:48.045 cpu : usr=93.25%, sys=5.19%, ctx=12, majf=0, minf=9 00:25:48.045 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.045 issued rwts: total=1339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.045 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:48.045 00:25:48.045 Run status group 0 (all jobs): 00:25:48.045 READ: bw=110MiB/s (115MB/s), 32.6MiB/s-43.7MiB/s (34.2MB/s-45.8MB/s), io=549MiB (576MB), run=5007-5009msec 00:25:48.045 16:45:25 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:48.045 16:45:25 -- target/dif.sh@43 -- # local sub 00:25:48.045 16:45:25 -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.045 16:45:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:48.045 16:45:25 -- target/dif.sh@36 -- # local sub_id=0 00:25:48.045 16:45:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:48.045 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.045 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.045 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.045 16:45:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:48.045 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.045 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.045 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.045 16:45:25 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:48.045 16:45:25 -- target/dif.sh@109 -- # bs=4k 00:25:48.045 16:45:25 -- target/dif.sh@109 -- # numjobs=8 00:25:48.045 16:45:25 -- target/dif.sh@109 -- # iodepth=16 00:25:48.045 16:45:25 -- target/dif.sh@109 -- # runtime= 00:25:48.045 16:45:25 -- target/dif.sh@109 -- # files=2 00:25:48.045 16:45:25 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:48.045 16:45:25 -- target/dif.sh@28 -- # local sub 00:25:48.046 16:45:25 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.046 16:45:25 -- target/dif.sh@31 -- # create_subsystem 0 00:25:48.046 16:45:25 -- target/dif.sh@18 -- # local sub_id=0 00:25:48.046 16:45:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 bdev_null0 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 [2024-11-16 16:45:25.430388] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.046 16:45:25 -- target/dif.sh@31 -- # create_subsystem 1 00:25:48.046 16:45:25 -- target/dif.sh@18 -- # local sub_id=1 00:25:48.046 16:45:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 bdev_null1 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.046 16:45:25 -- target/dif.sh@31 -- # create_subsystem 2 00:25:48.046 16:45:25 -- target/dif.sh@18 -- # local sub_id=2 00:25:48.046 16:45:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 bdev_null2 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:48.046 16:45:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.046 16:45:25 -- common/autotest_common.sh@10 -- # set +x 00:25:48.046 16:45:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.046 16:45:25 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:48.046 16:45:25 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:48.046 16:45:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:48.046 16:45:25 -- nvmf/common.sh@520 -- # config=() 00:25:48.046 16:45:25 -- nvmf/common.sh@520 -- # local subsystem config 00:25:48.046 16:45:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:48.046 16:45:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:48.046 { 00:25:48.046 "params": { 00:25:48.046 "name": "Nvme$subsystem", 00:25:48.046 "trtype": "$TEST_TRANSPORT", 00:25:48.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.046 "adrfam": "ipv4", 00:25:48.046 "trsvcid": "$NVMF_PORT", 00:25:48.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.046 "hdgst": ${hdgst:-false}, 00:25:48.046 "ddgst": ${ddgst:-false} 00:25:48.046 }, 00:25:48.046 "method": "bdev_nvme_attach_controller" 00:25:48.046 } 00:25:48.046 EOF 00:25:48.046 )") 00:25:48.046 16:45:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.046 16:45:25 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.046 16:45:25 -- target/dif.sh@82 -- # gen_fio_conf 00:25:48.046 16:45:25 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:48.046 16:45:25 -- target/dif.sh@54 -- # local file 00:25:48.046 16:45:25 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.046 16:45:25 -- target/dif.sh@56 -- # cat 00:25:48.046 16:45:25 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:48.046 16:45:25 -- nvmf/common.sh@542 -- # cat 00:25:48.046 16:45:25 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.046 16:45:25 -- common/autotest_common.sh@1330 -- # shift 00:25:48.046 16:45:25 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:48.046 16:45:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.046 16:45:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.046 16:45:25 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:48.046 16:45:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:48.046 16:45:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:48.046 16:45:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:48.046 { 00:25:48.046 "params": { 00:25:48.046 "name": "Nvme$subsystem", 00:25:48.046 "trtype": "$TEST_TRANSPORT", 00:25:48.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.046 "adrfam": "ipv4", 00:25:48.046 "trsvcid": "$NVMF_PORT", 00:25:48.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.046 "hdgst": ${hdgst:-false}, 00:25:48.046 "ddgst": ${ddgst:-false} 00:25:48.046 }, 00:25:48.046 
"method": "bdev_nvme_attach_controller" 00:25:48.046 } 00:25:48.046 EOF 00:25:48.046 )") 00:25:48.046 16:45:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:48.046 16:45:25 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.046 16:45:25 -- nvmf/common.sh@542 -- # cat 00:25:48.046 16:45:25 -- target/dif.sh@73 -- # cat 00:25:48.046 16:45:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:48.046 16:45:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:48.046 { 00:25:48.046 "params": { 00:25:48.046 "name": "Nvme$subsystem", 00:25:48.046 "trtype": "$TEST_TRANSPORT", 00:25:48.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.046 "adrfam": "ipv4", 00:25:48.046 "trsvcid": "$NVMF_PORT", 00:25:48.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.046 "hdgst": ${hdgst:-false}, 00:25:48.046 "ddgst": ${ddgst:-false} 00:25:48.046 }, 00:25:48.046 "method": "bdev_nvme_attach_controller" 00:25:48.046 } 00:25:48.046 EOF 00:25:48.046 )") 00:25:48.046 16:45:25 -- nvmf/common.sh@542 -- # cat 00:25:48.046 16:45:25 -- target/dif.sh@72 -- # (( file++ )) 00:25:48.046 16:45:25 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.046 16:45:25 -- target/dif.sh@73 -- # cat 00:25:48.046 16:45:25 -- target/dif.sh@72 -- # (( file++ )) 00:25:48.046 16:45:25 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.046 16:45:25 -- nvmf/common.sh@544 -- # jq . 00:25:48.046 16:45:25 -- nvmf/common.sh@545 -- # IFS=, 00:25:48.046 16:45:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:48.046 "params": { 00:25:48.046 "name": "Nvme0", 00:25:48.046 "trtype": "tcp", 00:25:48.046 "traddr": "10.0.0.2", 00:25:48.046 "adrfam": "ipv4", 00:25:48.046 "trsvcid": "4420", 00:25:48.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:48.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:48.046 "hdgst": false, 00:25:48.046 "ddgst": false 00:25:48.046 }, 00:25:48.046 "method": "bdev_nvme_attach_controller" 00:25:48.046 },{ 00:25:48.046 "params": { 00:25:48.046 "name": "Nvme1", 00:25:48.046 "trtype": "tcp", 00:25:48.046 "traddr": "10.0.0.2", 00:25:48.046 "adrfam": "ipv4", 00:25:48.046 "trsvcid": "4420", 00:25:48.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.046 "hdgst": false, 00:25:48.046 "ddgst": false 00:25:48.046 }, 00:25:48.046 "method": "bdev_nvme_attach_controller" 00:25:48.046 },{ 00:25:48.046 "params": { 00:25:48.046 "name": "Nvme2", 00:25:48.046 "trtype": "tcp", 00:25:48.046 "traddr": "10.0.0.2", 00:25:48.046 "adrfam": "ipv4", 00:25:48.046 "trsvcid": "4420", 00:25:48.046 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:48.046 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:48.046 "hdgst": false, 00:25:48.046 "ddgst": false 00:25:48.046 }, 00:25:48.046 "method": "bdev_nvme_attach_controller" 00:25:48.047 }' 00:25:48.305 16:45:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:48.305 16:45:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:48.305 16:45:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.305 16:45:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:48.305 16:45:25 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:48.305 16:45:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:48.305 16:45:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:48.305 16:45:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:48.305 16:45:25 -- 
common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:48.305 16:45:25 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.305 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:48.305 ... 00:25:48.305 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:48.305 ... 00:25:48.305 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:48.305 ... 00:25:48.305 fio-3.35 00:25:48.305 Starting 24 threads 00:25:49.241 [2024-11-16 16:45:26.366631] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:49.241 [2024-11-16 16:45:26.367510] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:59.232 00:25:59.232 filename0: (groupid=0, jobs=1): err= 0: pid=102791: Sat Nov 16 16:45:36 2024 00:25:59.232 read: IOPS=309, BW=1239KiB/s (1269kB/s)(12.2MiB/10045msec) 00:25:59.232 slat (usec): min=3, max=4031, avg=12.91, stdev=80.75 00:25:59.232 clat (msec): min=3, max=117, avg=51.53, stdev=16.02 00:25:59.232 lat (msec): min=3, max=117, avg=51.54, stdev=16.02 00:25:59.232 clat percentiles (msec): 00:25:59.232 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 00:25:59.232 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 55], 00:25:59.232 | 70.00th=[ 59], 80.00th=[ 65], 90.00th=[ 72], 95.00th=[ 80], 00:25:59.232 | 99.00th=[ 97], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 118], 00:25:59.232 | 99.99th=[ 118] 00:25:59.232 bw ( KiB/s): min= 946, max= 1536, per=4.96%, avg=1238.10, stdev=180.68, samples=20 00:25:59.232 iops : min= 236, max= 384, avg=309.50, stdev=45.21, samples=20 00:25:59.232 lat (msec) : 4=0.03%, 10=1.51%, 20=0.51%, 50=50.06%, 100=47.11% 00:25:59.232 lat (msec) : 250=0.77% 00:25:59.232 cpu : usr=48.00%, sys=0.66%, ctx=1015, majf=0, minf=9 00:25:59.232 IO depths : 1=0.8%, 2=2.1%, 4=9.7%, 8=74.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:59.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.232 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.232 issued rwts: total=3112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.232 filename0: (groupid=0, jobs=1): err= 0: pid=102792: Sat Nov 16 16:45:36 2024 00:25:59.232 read: IOPS=254, BW=1018KiB/s (1043kB/s)(9.95MiB/10009msec) 00:25:59.232 slat (usec): min=4, max=8045, avg=20.61, stdev=238.57 00:25:59.232 clat (msec): min=25, max=130, avg=62.73, stdev=19.63 00:25:59.232 lat (msec): min=25, max=130, avg=62.76, stdev=19.64 00:25:59.232 clat percentiles (msec): 00:25:59.232 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 47], 00:25:59.233 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 63], 00:25:59.233 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 92], 95.00th=[ 106], 00:25:59.233 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 131], 00:25:59.233 | 99.99th=[ 131] 00:25:59.233 bw ( KiB/s): min= 744, max= 1200, per=4.05%, avg=1012.80, stdev=121.25, samples=20 00:25:59.233 iops : min= 186, max= 300, avg=253.20, stdev=30.31, samples=20 00:25:59.233 lat (msec) : 50=26.77%, 100=66.88%, 250=6.36% 00:25:59.233 cpu : usr=38.92%, sys=0.55%, ctx=1138, majf=0, minf=9 00:25:59.233 IO 
depths : 1=1.5%, 2=3.1%, 4=10.4%, 8=73.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:25:59.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 issued rwts: total=2548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.233 filename0: (groupid=0, jobs=1): err= 0: pid=102793: Sat Nov 16 16:45:36 2024 00:25:59.233 read: IOPS=253, BW=1012KiB/s (1037kB/s)(9.91MiB/10024msec) 00:25:59.233 slat (usec): min=5, max=8024, avg=22.23, stdev=263.90 00:25:59.233 clat (msec): min=24, max=135, avg=63.07, stdev=20.85 00:25:59.233 lat (msec): min=24, max=135, avg=63.09, stdev=20.85 00:25:59.233 clat percentiles (msec): 00:25:59.233 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 47], 00:25:59.233 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 63], 00:25:59.233 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 107], 00:25:59.233 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:25:59.233 | 99.99th=[ 136] 00:25:59.233 bw ( KiB/s): min= 768, max= 1224, per=4.03%, avg=1007.90, stdev=153.14, samples=20 00:25:59.233 iops : min= 192, max= 306, avg=251.95, stdev=38.25, samples=20 00:25:59.233 lat (msec) : 50=30.90%, 100=63.38%, 250=5.72% 00:25:59.233 cpu : usr=35.38%, sys=0.52%, ctx=962, majf=0, minf=9 00:25:59.233 IO depths : 1=1.7%, 2=3.5%, 4=12.2%, 8=71.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:25:59.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 issued rwts: total=2537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.233 filename0: (groupid=0, jobs=1): err= 0: pid=102794: Sat Nov 16 16:45:36 2024 00:25:59.233 read: IOPS=237, BW=950KiB/s (972kB/s)(9516KiB/10020msec) 00:25:59.233 slat (nsec): min=4920, max=47885, avg=13103.05, stdev=7953.26 00:25:59.233 clat (msec): min=19, max=125, avg=67.27, stdev=17.62 00:25:59.233 lat (msec): min=19, max=125, avg=67.28, stdev=17.62 00:25:59.233 clat percentiles (msec): 00:25:59.233 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:25:59.233 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 71], 00:25:59.233 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 95], 95.00th=[ 101], 00:25:59.233 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:25:59.233 | 99.99th=[ 126] 00:25:59.233 bw ( KiB/s): min= 768, max= 1152, per=3.78%, avg=945.20, stdev=98.30, samples=20 00:25:59.233 iops : min= 192, max= 288, avg=236.30, stdev=24.57, samples=20 00:25:59.233 lat (msec) : 20=0.29%, 50=14.21%, 100=80.92%, 250=4.58% 00:25:59.233 cpu : usr=39.48%, sys=0.46%, ctx=1014, majf=0, minf=9 00:25:59.233 IO depths : 1=2.1%, 2=4.7%, 4=13.7%, 8=68.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:25:59.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 complete : 0=0.0%, 4=91.0%, 8=4.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 issued rwts: total=2379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.233 filename0: (groupid=0, jobs=1): err= 0: pid=102795: Sat Nov 16 16:45:36 2024 00:25:59.233 read: IOPS=229, BW=920KiB/s (942kB/s)(9216KiB/10020msec) 00:25:59.233 slat (usec): min=3, max=8022, avg=18.17, stdev=186.74 00:25:59.233 clat (msec): min=33, max=152, avg=69.42, 
stdev=19.51 00:25:59.233 lat (msec): min=33, max=152, avg=69.44, stdev=19.50 00:25:59.233 clat percentiles (msec): 00:25:59.233 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:25:59.233 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 72], 00:25:59.233 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:25:59.233 | 99.00th=[ 130], 99.50th=[ 138], 99.90th=[ 153], 99.95th=[ 153], 00:25:59.233 | 99.99th=[ 153] 00:25:59.233 bw ( KiB/s): min= 688, max= 1168, per=3.67%, avg=917.20, stdev=120.86, samples=20 00:25:59.233 iops : min= 172, max= 292, avg=229.30, stdev=30.22, samples=20 00:25:59.233 lat (msec) : 50=13.11%, 100=80.30%, 250=6.60% 00:25:59.233 cpu : usr=34.63%, sys=0.34%, ctx=904, majf=0, minf=9 00:25:59.233 IO depths : 1=1.7%, 2=3.9%, 4=12.5%, 8=70.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:25:59.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.233 filename0: (groupid=0, jobs=1): err= 0: pid=102796: Sat Nov 16 16:45:36 2024 00:25:59.233 read: IOPS=235, BW=944KiB/s (967kB/s)(9448KiB/10009msec) 00:25:59.233 slat (usec): min=4, max=8021, avg=19.02, stdev=202.35 00:25:59.233 clat (msec): min=29, max=149, avg=67.67, stdev=17.45 00:25:59.233 lat (msec): min=29, max=149, avg=67.69, stdev=17.45 00:25:59.233 clat percentiles (msec): 00:25:59.233 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 55], 00:25:59.233 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:25:59.233 | 70.00th=[ 75], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 101], 00:25:59.233 | 99.00th=[ 115], 99.50th=[ 125], 99.90th=[ 150], 99.95th=[ 150], 00:25:59.233 | 99.99th=[ 150] 00:25:59.233 bw ( KiB/s): min= 640, max= 1200, per=3.76%, avg=939.95, stdev=123.29, samples=19 00:25:59.233 iops : min= 160, max= 300, avg=234.95, stdev=30.81, samples=19 00:25:59.233 lat (msec) : 50=13.46%, 100=82.09%, 250=4.45% 00:25:59.233 cpu : usr=43.94%, sys=0.72%, ctx=1156, majf=0, minf=9 00:25:59.233 IO depths : 1=2.8%, 2=6.6%, 4=17.4%, 8=62.9%, 16=10.3%, 32=0.0%, >=64=0.0% 00:25:59.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 complete : 0=0.0%, 4=92.2%, 8=2.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 issued rwts: total=2362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.233 filename0: (groupid=0, jobs=1): err= 0: pid=102797: Sat Nov 16 16:45:36 2024 00:25:59.233 read: IOPS=273, BW=1095KiB/s (1122kB/s)(10.7MiB/10035msec) 00:25:59.233 slat (usec): min=3, max=2993, avg=13.11, stdev=57.33 00:25:59.233 clat (msec): min=23, max=131, avg=58.32, stdev=17.43 00:25:59.233 lat (msec): min=23, max=131, avg=58.33, stdev=17.43 00:25:59.233 clat percentiles (msec): 00:25:59.233 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 45], 00:25:59.233 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 61], 00:25:59.233 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 93], 00:25:59.233 | 99.00th=[ 110], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:25:59.233 | 99.99th=[ 132] 00:25:59.233 bw ( KiB/s): min= 848, max= 1328, per=4.37%, avg=1092.80, stdev=138.74, samples=20 00:25:59.233 iops : min= 212, max= 332, avg=273.20, stdev=34.69, samples=20 00:25:59.233 lat (msec) : 50=38.03%, 100=59.90%, 250=2.07% 00:25:59.233 cpu 
: usr=35.93%, sys=0.68%, ctx=1074, majf=0, minf=9 00:25:59.233 IO depths : 1=0.8%, 2=2.0%, 4=7.9%, 8=76.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:25:59.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 issued rwts: total=2748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.233 filename0: (groupid=0, jobs=1): err= 0: pid=102798: Sat Nov 16 16:45:36 2024 00:25:59.233 read: IOPS=247, BW=989KiB/s (1013kB/s)(9908KiB/10017msec) 00:25:59.233 slat (usec): min=3, max=8029, avg=22.98, stdev=278.78 00:25:59.233 clat (msec): min=29, max=135, avg=64.56, stdev=19.20 00:25:59.233 lat (msec): min=29, max=135, avg=64.58, stdev=19.21 00:25:59.233 clat percentiles (msec): 00:25:59.233 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:25:59.233 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 69], 00:25:59.233 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 89], 95.00th=[ 104], 00:25:59.233 | 99.00th=[ 124], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 136], 00:25:59.233 | 99.99th=[ 136] 00:25:59.233 bw ( KiB/s): min= 728, max= 1328, per=3.94%, avg=984.40, stdev=134.91, samples=20 00:25:59.233 iops : min= 182, max= 332, avg=246.10, stdev=33.73, samples=20 00:25:59.233 lat (msec) : 50=25.31%, 100=69.64%, 250=5.05% 00:25:59.233 cpu : usr=34.08%, sys=0.50%, ctx=941, majf=0, minf=9 00:25:59.233 IO depths : 1=1.3%, 2=2.9%, 4=11.0%, 8=72.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:59.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 issued rwts: total=2477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.233 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.233 filename1: (groupid=0, jobs=1): err= 0: pid=102799: Sat Nov 16 16:45:36 2024 00:25:59.233 read: IOPS=286, BW=1145KiB/s (1173kB/s)(11.2MiB/10037msec) 00:25:59.233 slat (usec): min=4, max=8021, avg=16.24, stdev=167.45 00:25:59.233 clat (msec): min=15, max=131, avg=55.69, stdev=17.49 00:25:59.233 lat (msec): min=15, max=131, avg=55.71, stdev=17.49 00:25:59.233 clat percentiles (msec): 00:25:59.233 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 40], 00:25:59.233 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 61], 00:25:59.233 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 85], 00:25:59.233 | 99.00th=[ 108], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 132], 00:25:59.233 | 99.99th=[ 132] 00:25:59.233 bw ( KiB/s): min= 872, max= 1536, per=4.58%, avg=1143.20, stdev=166.90, samples=20 00:25:59.233 iops : min= 218, max= 384, avg=285.80, stdev=41.73, samples=20 00:25:59.233 lat (msec) : 20=0.56%, 50=41.37%, 100=56.26%, 250=1.81% 00:25:59.233 cpu : usr=38.80%, sys=0.60%, ctx=1044, majf=0, minf=9 00:25:59.233 IO depths : 1=0.5%, 2=1.0%, 4=6.9%, 8=78.6%, 16=13.0%, 32=0.0%, >=64=0.0% 00:25:59.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 complete : 0=0.0%, 4=89.3%, 8=6.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.233 issued rwts: total=2874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.234 filename1: (groupid=0, jobs=1): err= 0: pid=102800: Sat Nov 16 16:45:36 2024 00:25:59.234 read: IOPS=300, BW=1203KiB/s (1232kB/s)(11.8MiB/10033msec) 00:25:59.234 slat (usec): min=4, max=8018, avg=17.35, stdev=207.51 
00:25:59.234 clat (msec): min=6, max=126, avg=53.03, stdev=17.51 00:25:59.234 lat (msec): min=6, max=126, avg=53.05, stdev=17.51 00:25:59.234 clat percentiles (msec): 00:25:59.234 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 39], 00:25:59.234 | 30.00th=[ 42], 40.00th=[ 45], 50.00th=[ 49], 60.00th=[ 56], 00:25:59.234 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 74], 95.00th=[ 85], 00:25:59.234 | 99.00th=[ 109], 99.50th=[ 127], 99.90th=[ 127], 99.95th=[ 127], 00:25:59.234 | 99.99th=[ 127] 00:25:59.234 bw ( KiB/s): min= 800, max= 1458, per=4.81%, avg=1202.90, stdev=159.14, samples=20 00:25:59.234 iops : min= 200, max= 364, avg=300.70, stdev=39.74, samples=20 00:25:59.234 lat (msec) : 10=0.53%, 20=0.53%, 50=51.16%, 100=46.42%, 250=1.36% 00:25:59.234 cpu : usr=42.18%, sys=0.62%, ctx=1286, majf=0, minf=9 00:25:59.234 IO depths : 1=0.9%, 2=1.9%, 4=8.3%, 8=75.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:25:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 issued rwts: total=3018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.234 filename1: (groupid=0, jobs=1): err= 0: pid=102801: Sat Nov 16 16:45:36 2024 00:25:59.234 read: IOPS=253, BW=1013KiB/s (1037kB/s)(9.91MiB/10016msec) 00:25:59.234 slat (usec): min=3, max=8022, avg=20.01, stdev=238.56 00:25:59.234 clat (msec): min=24, max=150, avg=63.07, stdev=20.51 00:25:59.234 lat (msec): min=24, max=150, avg=63.09, stdev=20.51 00:25:59.234 clat percentiles (msec): 00:25:59.234 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 47], 00:25:59.234 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 63], 00:25:59.234 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 104], 00:25:59.234 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 150], 99.95th=[ 150], 00:25:59.234 | 99.99th=[ 150] 00:25:59.234 bw ( KiB/s): min= 736, max= 1352, per=4.04%, avg=1008.05, stdev=159.54, samples=20 00:25:59.234 iops : min= 184, max= 338, avg=252.00, stdev=39.89, samples=20 00:25:59.234 lat (msec) : 50=30.21%, 100=64.75%, 250=5.05% 00:25:59.234 cpu : usr=33.85%, sys=0.53%, ctx=898, majf=0, minf=9 00:25:59.234 IO depths : 1=1.2%, 2=2.6%, 4=9.5%, 8=74.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 issued rwts: total=2536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.234 filename1: (groupid=0, jobs=1): err= 0: pid=102802: Sat Nov 16 16:45:36 2024 00:25:59.234 read: IOPS=277, BW=1112KiB/s (1138kB/s)(10.9MiB/10024msec) 00:25:59.234 slat (usec): min=3, max=4025, avg=15.90, stdev=121.60 00:25:59.234 clat (msec): min=25, max=143, avg=57.46, stdev=18.05 00:25:59.234 lat (msec): min=25, max=143, avg=57.48, stdev=18.05 00:25:59.234 clat percentiles (msec): 00:25:59.234 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 41], 00:25:59.234 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 61], 00:25:59.234 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 92], 00:25:59.234 | 99.00th=[ 107], 99.50th=[ 120], 99.90th=[ 144], 99.95th=[ 144], 00:25:59.234 | 99.99th=[ 144] 00:25:59.234 bw ( KiB/s): min= 848, max= 1424, per=4.43%, avg=1107.55, stdev=174.07, samples=20 00:25:59.234 iops : min= 212, max= 356, avg=276.85, stdev=43.52, samples=20 
00:25:59.234 lat (msec) : 50=39.52%, 100=58.97%, 250=1.51% 00:25:59.234 cpu : usr=42.45%, sys=0.57%, ctx=1314, majf=0, minf=9 00:25:59.234 IO depths : 1=0.8%, 2=1.9%, 4=9.0%, 8=75.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:25:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 issued rwts: total=2786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.234 filename1: (groupid=0, jobs=1): err= 0: pid=102803: Sat Nov 16 16:45:36 2024 00:25:59.234 read: IOPS=243, BW=973KiB/s (996kB/s)(9744KiB/10016msec) 00:25:59.234 slat (usec): min=4, max=8018, avg=19.13, stdev=229.44 00:25:59.234 clat (msec): min=17, max=147, avg=65.65, stdev=20.02 00:25:59.234 lat (msec): min=17, max=147, avg=65.67, stdev=20.02 00:25:59.234 clat percentiles (msec): 00:25:59.234 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:25:59.234 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 70], 00:25:59.234 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 106], 00:25:59.234 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:25:59.234 | 99.99th=[ 148] 00:25:59.234 bw ( KiB/s): min= 768, max= 1224, per=3.87%, avg=968.10, stdev=120.23, samples=20 00:25:59.234 iops : min= 192, max= 306, avg=242.00, stdev=30.02, samples=20 00:25:59.234 lat (msec) : 20=0.08%, 50=24.71%, 100=69.25%, 250=5.95% 00:25:59.234 cpu : usr=32.70%, sys=0.47%, ctx=858, majf=0, minf=9 00:25:59.234 IO depths : 1=1.3%, 2=3.0%, 4=10.7%, 8=72.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 issued rwts: total=2436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.234 filename1: (groupid=0, jobs=1): err= 0: pid=102804: Sat Nov 16 16:45:36 2024 00:25:59.234 read: IOPS=238, BW=954KiB/s (977kB/s)(9544KiB/10007msec) 00:25:59.234 slat (usec): min=4, max=8024, avg=22.99, stdev=283.76 00:25:59.234 clat (msec): min=10, max=155, avg=66.96, stdev=19.42 00:25:59.234 lat (msec): min=10, max=155, avg=66.99, stdev=19.42 00:25:59.234 clat percentiles (msec): 00:25:59.234 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 52], 00:25:59.234 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 61], 60.00th=[ 71], 00:25:59.234 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 90], 95.00th=[ 108], 00:25:59.234 | 99.00th=[ 127], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:25:59.234 | 99.99th=[ 157] 00:25:59.234 bw ( KiB/s): min= 768, max= 1146, per=3.81%, avg=952.10, stdev=99.68, samples=20 00:25:59.234 iops : min= 192, max= 286, avg=238.00, stdev=24.87, samples=20 00:25:59.234 lat (msec) : 20=0.67%, 50=18.02%, 100=74.64%, 250=6.66% 00:25:59.234 cpu : usr=32.87%, sys=0.35%, ctx=858, majf=0, minf=9 00:25:59.234 IO depths : 1=1.8%, 2=4.7%, 4=14.2%, 8=67.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:25:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.234 filename1: (groupid=0, jobs=1): err= 0: pid=102805: Sat Nov 16 16:45:36 2024 00:25:59.234 read: IOPS=281, BW=1126KiB/s 
(1153kB/s)(11.0MiB/10042msec) 00:25:59.234 slat (usec): min=3, max=5999, avg=13.52, stdev=112.86 00:25:59.234 clat (msec): min=15, max=142, avg=56.64, stdev=17.85 00:25:59.234 lat (msec): min=15, max=142, avg=56.65, stdev=17.85 00:25:59.234 clat percentiles (msec): 00:25:59.234 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:25:59.234 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 60], 00:25:59.234 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 92], 00:25:59.234 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 132], 99.95th=[ 132], 00:25:59.234 | 99.99th=[ 144] 00:25:59.234 bw ( KiB/s): min= 864, max= 1410, per=4.50%, avg=1123.70, stdev=146.05, samples=20 00:25:59.234 iops : min= 216, max= 352, avg=280.90, stdev=36.46, samples=20 00:25:59.234 lat (msec) : 20=0.57%, 50=38.11%, 100=59.48%, 250=1.84% 00:25:59.234 cpu : usr=40.97%, sys=0.60%, ctx=1227, majf=0, minf=9 00:25:59.234 IO depths : 1=1.0%, 2=2.1%, 4=9.4%, 8=74.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 complete : 0=0.0%, 4=90.1%, 8=5.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 issued rwts: total=2826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.234 filename1: (groupid=0, jobs=1): err= 0: pid=102806: Sat Nov 16 16:45:36 2024 00:25:59.234 read: IOPS=270, BW=1083KiB/s (1109kB/s)(10.6MiB/10018msec) 00:25:59.234 slat (usec): min=3, max=7981, avg=18.94, stdev=211.16 00:25:59.234 clat (msec): min=22, max=272, avg=58.99, stdev=24.53 00:25:59.234 lat (msec): min=22, max=272, avg=59.01, stdev=24.53 00:25:59.234 clat percentiles (msec): 00:25:59.234 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 41], 00:25:59.234 | 30.00th=[ 45], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 60], 00:25:59.234 | 70.00th=[ 66], 80.00th=[ 73], 90.00th=[ 88], 95.00th=[ 101], 00:25:59.234 | 99.00th=[ 121], 99.50th=[ 243], 99.90th=[ 243], 99.95th=[ 271], 00:25:59.234 | 99.99th=[ 271] 00:25:59.234 bw ( KiB/s): min= 640, max= 1280, per=4.32%, avg=1078.40, stdev=165.68, samples=20 00:25:59.234 iops : min= 160, max= 320, avg=269.60, stdev=41.42, samples=20 00:25:59.234 lat (msec) : 50=41.15%, 100=53.80%, 250=4.98%, 500=0.07% 00:25:59.234 cpu : usr=41.50%, sys=0.61%, ctx=1216, majf=0, minf=9 00:25:59.234 IO depths : 1=0.8%, 2=2.0%, 4=8.8%, 8=75.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:25:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.234 issued rwts: total=2712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.234 filename2: (groupid=0, jobs=1): err= 0: pid=102807: Sat Nov 16 16:45:36 2024 00:25:59.234 read: IOPS=304, BW=1218KiB/s (1247kB/s)(12.0MiB/10059msec) 00:25:59.234 slat (usec): min=3, max=8030, avg=24.77, stdev=289.79 00:25:59.234 clat (msec): min=4, max=127, avg=52.30, stdev=18.48 00:25:59.234 lat (msec): min=4, max=127, avg=52.33, stdev=18.48 00:25:59.234 clat percentiles (msec): 00:25:59.234 | 1.00th=[ 6], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 38], 00:25:59.234 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 50], 60.00th=[ 55], 00:25:59.234 | 70.00th=[ 59], 80.00th=[ 66], 90.00th=[ 75], 95.00th=[ 91], 00:25:59.234 | 99.00th=[ 104], 99.50th=[ 116], 99.90th=[ 128], 99.95th=[ 128], 00:25:59.234 | 99.99th=[ 128] 00:25:59.234 bw ( KiB/s): min= 912, max= 1792, per=4.88%, avg=1218.40, 
stdev=221.90, samples=20 00:25:59.235 iops : min= 228, max= 448, avg=304.60, stdev=55.48, samples=20 00:25:59.235 lat (msec) : 10=1.57%, 20=0.52%, 50=49.15%, 100=47.49%, 250=1.27% 00:25:59.235 cpu : usr=44.01%, sys=0.63%, ctx=1200, majf=0, minf=9 00:25:59.235 IO depths : 1=0.9%, 2=2.1%, 4=8.6%, 8=75.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:59.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 issued rwts: total=3062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.235 filename2: (groupid=0, jobs=1): err= 0: pid=102808: Sat Nov 16 16:45:36 2024 00:25:59.235 read: IOPS=237, BW=950KiB/s (973kB/s)(9508KiB/10005msec) 00:25:59.235 slat (usec): min=4, max=7031, avg=18.51, stdev=185.31 00:25:59.235 clat (msec): min=9, max=141, avg=67.21, stdev=19.09 00:25:59.235 lat (msec): min=9, max=141, avg=67.23, stdev=19.09 00:25:59.235 clat percentiles (msec): 00:25:59.235 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:25:59.235 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 69], 00:25:59.235 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 103], 00:25:59.235 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 142], 00:25:59.235 | 99.99th=[ 142] 00:25:59.235 bw ( KiB/s): min= 640, max= 1152, per=3.76%, avg=940.21, stdev=106.88, samples=19 00:25:59.235 iops : min= 160, max= 288, avg=235.05, stdev=26.72, samples=19 00:25:59.235 lat (msec) : 10=0.67%, 50=13.88%, 100=80.31%, 250=5.13% 00:25:59.235 cpu : usr=37.58%, sys=0.47%, ctx=1138, majf=0, minf=9 00:25:59.235 IO depths : 1=2.5%, 2=5.8%, 4=15.5%, 8=65.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:25:59.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 complete : 0=0.0%, 4=91.7%, 8=3.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 issued rwts: total=2377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.235 filename2: (groupid=0, jobs=1): err= 0: pid=102809: Sat Nov 16 16:45:36 2024 00:25:59.235 read: IOPS=240, BW=963KiB/s (986kB/s)(9632KiB/10006msec) 00:25:59.235 slat (usec): min=4, max=8017, avg=20.48, stdev=244.46 00:25:59.235 clat (msec): min=9, max=144, avg=66.36, stdev=20.22 00:25:59.235 lat (msec): min=9, max=144, avg=66.38, stdev=20.22 00:25:59.235 clat percentiles (msec): 00:25:59.235 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 51], 00:25:59.235 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 69], 00:25:59.235 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 106], 00:25:59.235 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:25:59.235 | 99.99th=[ 144] 00:25:59.235 bw ( KiB/s): min= 728, max= 1280, per=3.84%, avg=959.16, stdev=137.82, samples=19 00:25:59.235 iops : min= 182, max= 320, avg=239.79, stdev=34.45, samples=19 00:25:59.235 lat (msec) : 10=0.08%, 20=0.58%, 50=19.14%, 100=74.63%, 250=5.56% 00:25:59.235 cpu : usr=33.40%, sys=0.45%, ctx=881, majf=0, minf=9 00:25:59.235 IO depths : 1=1.5%, 2=3.3%, 4=10.9%, 8=71.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:25:59.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 complete : 0=0.0%, 4=90.4%, 8=5.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 issued rwts: total=2408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.235 
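Each per-file block above follows the same fio layout: read IOPS and bandwidth, submission/completion latency (slat/clat) percentiles, per-sample bandwidth and IOPS, CPU usage, and the IO-depth distribution. fio reports bandwidth in binary units (KiB/s) with the decimal equivalent (kB/s) in parentheses, so the paired figures differ by a factor of 1024/1000. A minimal shell sketch of that conversion, using the 1218 KiB/s figure from the filename2 job above:

# fio's KiB/s (1 KiB = 1024 bytes) converted to decimal kB/s
kib_per_s=1218
echo $(( kib_per_s * 1024 / 1000 ))   # prints 1247, matching fio's "(1247kB/s)"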
filename2: (groupid=0, jobs=1): err= 0: pid=102810: Sat Nov 16 16:45:36 2024 00:25:59.235 read: IOPS=243, BW=973KiB/s (996kB/s)(9732KiB/10007msec) 00:25:59.235 slat (nsec): min=4647, max=63999, avg=12706.83, stdev=7993.59 00:25:59.235 clat (msec): min=26, max=125, avg=65.72, stdev=18.25 00:25:59.235 lat (msec): min=26, max=125, avg=65.73, stdev=18.25 00:25:59.235 clat percentiles (msec): 00:25:59.235 | 1.00th=[ 36], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 49], 00:25:59.235 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 62], 60.00th=[ 71], 00:25:59.235 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 96], 00:25:59.235 | 99.00th=[ 112], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 127], 00:25:59.235 | 99.99th=[ 127] 00:25:59.235 bw ( KiB/s): min= 768, max= 1256, per=3.87%, avg=966.80, stdev=113.26, samples=20 00:25:59.235 iops : min= 192, max= 314, avg=241.70, stdev=28.32, samples=20 00:25:59.235 lat (msec) : 50=24.25%, 100=72.22%, 250=3.53% 00:25:59.235 cpu : usr=35.06%, sys=0.60%, ctx=948, majf=0, minf=9 00:25:59.235 IO depths : 1=2.3%, 2=4.8%, 4=13.3%, 8=68.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:25:59.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 issued rwts: total=2433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.235 filename2: (groupid=0, jobs=1): err= 0: pid=102811: Sat Nov 16 16:45:36 2024 00:25:59.235 read: IOPS=255, BW=1022KiB/s (1046kB/s)(10.0MiB/10027msec) 00:25:59.235 slat (usec): min=5, max=7997, avg=19.65, stdev=221.99 00:25:59.235 clat (msec): min=23, max=130, avg=62.48, stdev=18.14 00:25:59.235 lat (msec): min=23, max=130, avg=62.50, stdev=18.14 00:25:59.235 clat percentiles (msec): 00:25:59.235 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:25:59.235 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 65], 00:25:59.235 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 86], 95.00th=[ 97], 00:25:59.235 | 99.00th=[ 120], 99.50th=[ 128], 99.90th=[ 131], 99.95th=[ 131], 00:25:59.235 | 99.99th=[ 131] 00:25:59.235 bw ( KiB/s): min= 784, max= 1328, per=4.07%, avg=1017.50, stdev=153.75, samples=20 00:25:59.235 iops : min= 196, max= 332, avg=254.35, stdev=38.41, samples=20 00:25:59.235 lat (msec) : 50=26.24%, 100=69.70%, 250=4.06% 00:25:59.235 cpu : usr=40.05%, sys=0.61%, ctx=1194, majf=0, minf=9 00:25:59.235 IO depths : 1=1.3%, 2=3.0%, 4=10.4%, 8=72.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:59.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 complete : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 issued rwts: total=2561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.235 filename2: (groupid=0, jobs=1): err= 0: pid=102812: Sat Nov 16 16:45:36 2024 00:25:59.235 read: IOPS=271, BW=1088KiB/s (1114kB/s)(10.7MiB/10037msec) 00:25:59.235 slat (usec): min=3, max=8027, avg=21.14, stdev=216.99 00:25:59.235 clat (msec): min=14, max=133, avg=58.72, stdev=18.35 00:25:59.235 lat (msec): min=14, max=133, avg=58.74, stdev=18.35 00:25:59.235 clat percentiles (msec): 00:25:59.235 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 43], 00:25:59.235 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61], 00:25:59.235 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:25:59.235 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 134], 99.95th=[ 134], 
00:25:59.235 | 99.99th=[ 134] 00:25:59.235 bw ( KiB/s): min= 800, max= 1434, per=4.34%, avg=1085.30, stdev=165.00, samples=20 00:25:59.235 iops : min= 200, max= 358, avg=271.30, stdev=41.19, samples=20 00:25:59.235 lat (msec) : 20=0.59%, 50=36.02%, 100=60.68%, 250=2.71% 00:25:59.235 cpu : usr=41.78%, sys=0.65%, ctx=1371, majf=0, minf=9 00:25:59.235 IO depths : 1=0.6%, 2=1.7%, 4=9.4%, 8=75.0%, 16=13.2%, 32=0.0%, >=64=0.0% 00:25:59.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 issued rwts: total=2729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.235 filename2: (groupid=0, jobs=1): err= 0: pid=102813: Sat Nov 16 16:45:36 2024 00:25:59.235 read: IOPS=249, BW=1000KiB/s (1024kB/s)(9.79MiB/10029msec) 00:25:59.235 slat (usec): min=6, max=8025, avg=15.72, stdev=160.20 00:25:59.235 clat (msec): min=27, max=131, avg=63.89, stdev=17.00 00:25:59.235 lat (msec): min=27, max=131, avg=63.90, stdev=17.00 00:25:59.235 clat percentiles (msec): 00:25:59.235 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 48], 00:25:59.235 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 67], 00:25:59.235 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:25:59.235 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:25:59.235 | 99.99th=[ 132] 00:25:59.235 bw ( KiB/s): min= 848, max= 1280, per=3.98%, avg=995.90, stdev=110.57, samples=20 00:25:59.235 iops : min= 212, max= 320, avg=248.95, stdev=27.61, samples=20 00:25:59.235 lat (msec) : 50=23.25%, 100=74.43%, 250=2.31% 00:25:59.235 cpu : usr=32.85%, sys=0.37%, ctx=858, majf=0, minf=9 00:25:59.235 IO depths : 1=1.6%, 2=3.8%, 4=12.6%, 8=70.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:25:59.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:59.235 filename2: (groupid=0, jobs=1): err= 0: pid=102815: Sat Nov 16 16:45:36 2024 00:25:59.235 read: IOPS=269, BW=1079KiB/s (1104kB/s)(10.6MiB/10024msec) 00:25:59.235 slat (usec): min=5, max=8033, avg=25.60, stdev=298.26 00:25:59.235 clat (msec): min=25, max=133, avg=59.12, stdev=17.37 00:25:59.235 lat (msec): min=25, max=133, avg=59.15, stdev=17.38 00:25:59.235 clat percentiles (msec): 00:25:59.235 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 43], 00:25:59.235 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 62], 00:25:59.235 | 70.00th=[ 66], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 92], 00:25:59.235 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 118], 99.95th=[ 118], 00:25:59.235 | 99.99th=[ 133] 00:25:59.235 bw ( KiB/s): min= 768, max= 1376, per=4.30%, avg=1074.25, stdev=156.33, samples=20 00:25:59.235 iops : min= 192, max= 344, avg=268.55, stdev=39.06, samples=20 00:25:59.235 lat (msec) : 50=34.11%, 100=64.67%, 250=1.22% 00:25:59.235 cpu : usr=41.00%, sys=0.59%, ctx=1200, majf=0, minf=9 00:25:59.235 IO depths : 1=1.2%, 2=2.7%, 4=10.1%, 8=73.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:25:59.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.235 issued rwts: total=2703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.235 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:25:59.235 00:25:59.235 Run status group 0 (all jobs): 00:25:59.235 READ: bw=24.4MiB/s (25.6MB/s), 920KiB/s-1239KiB/s (942kB/s-1269kB/s), io=245MiB (257MB), run=10005-10059msec 00:25:59.495 16:45:36 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:59.495 16:45:36 -- target/dif.sh@43 -- # local sub 00:25:59.495 16:45:36 -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.495 16:45:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:59.495 16:45:36 -- target/dif.sh@36 -- # local sub_id=0 00:25:59.495 16:45:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.495 16:45:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:59.495 16:45:36 -- target/dif.sh@36 -- # local sub_id=1 00:25:59.495 16:45:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.495 16:45:36 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:59.495 16:45:36 -- target/dif.sh@36 -- # local sub_id=2 00:25:59.495 16:45:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:59.495 16:45:36 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:59.495 16:45:36 -- target/dif.sh@115 -- # numjobs=2 00:25:59.495 16:45:36 -- target/dif.sh@115 -- # iodepth=8 00:25:59.495 16:45:36 -- target/dif.sh@115 -- # runtime=5 00:25:59.495 16:45:36 -- target/dif.sh@115 -- # files=1 00:25:59.495 16:45:36 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:59.495 16:45:36 -- target/dif.sh@28 -- # local sub 00:25:59.495 16:45:36 -- target/dif.sh@30 -- # for sub in "$@" 00:25:59.495 16:45:36 -- target/dif.sh@31 -- # create_subsystem 0 00:25:59.495 16:45:36 -- target/dif.sh@18 -- # local sub_id=0 00:25:59.495 16:45:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 1 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 bdev_null0 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 [2024-11-16 16:45:36.880636] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@30 -- # for sub in "$@" 00:25:59.495 16:45:36 -- target/dif.sh@31 -- # create_subsystem 1 00:25:59.495 16:45:36 -- target/dif.sh@18 -- # local sub_id=1 00:25:59.495 16:45:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 bdev_null1 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.495 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.495 16:45:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.495 16:45:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.495 16:45:36 -- common/autotest_common.sh@10 -- # set +x 00:25:59.496 16:45:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.496 16:45:36 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:59.496 16:45:36 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:59.496 16:45:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:59.496 16:45:36 -- nvmf/common.sh@520 -- # config=() 00:25:59.496 16:45:36 -- nvmf/common.sh@520 -- # local subsystem config 00:25:59.496 16:45:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.496 16:45:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.496 { 00:25:59.496 "params": { 
00:25:59.496 "name": "Nvme$subsystem", 00:25:59.496 "trtype": "$TEST_TRANSPORT", 00:25:59.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.496 "adrfam": "ipv4", 00:25:59.496 "trsvcid": "$NVMF_PORT", 00:25:59.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.496 "hdgst": ${hdgst:-false}, 00:25:59.496 "ddgst": ${ddgst:-false} 00:25:59.496 }, 00:25:59.496 "method": "bdev_nvme_attach_controller" 00:25:59.496 } 00:25:59.496 EOF 00:25:59.496 )") 00:25:59.496 16:45:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.496 16:45:36 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.496 16:45:36 -- target/dif.sh@82 -- # gen_fio_conf 00:25:59.496 16:45:36 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:59.496 16:45:36 -- target/dif.sh@54 -- # local file 00:25:59.496 16:45:36 -- target/dif.sh@56 -- # cat 00:25:59.496 16:45:36 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:59.496 16:45:36 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:59.496 16:45:36 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.496 16:45:36 -- common/autotest_common.sh@1330 -- # shift 00:25:59.496 16:45:36 -- nvmf/common.sh@542 -- # cat 00:25:59.496 16:45:36 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:59.496 16:45:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.496 16:45:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.496 16:45:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:59.496 16:45:36 -- target/dif.sh@72 -- # (( file <= files )) 00:25:59.496 16:45:36 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:59.496 16:45:36 -- target/dif.sh@73 -- # cat 00:25:59.496 16:45:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:59.496 16:45:36 -- target/dif.sh@72 -- # (( file++ )) 00:25:59.496 16:45:36 -- target/dif.sh@72 -- # (( file <= files )) 00:25:59.496 16:45:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:59.496 16:45:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:59.496 { 00:25:59.496 "params": { 00:25:59.496 "name": "Nvme$subsystem", 00:25:59.496 "trtype": "$TEST_TRANSPORT", 00:25:59.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.496 "adrfam": "ipv4", 00:25:59.496 "trsvcid": "$NVMF_PORT", 00:25:59.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.496 "hdgst": ${hdgst:-false}, 00:25:59.496 "ddgst": ${ddgst:-false} 00:25:59.496 }, 00:25:59.496 "method": "bdev_nvme_attach_controller" 00:25:59.496 } 00:25:59.496 EOF 00:25:59.496 )") 00:25:59.496 16:45:36 -- nvmf/common.sh@542 -- # cat 00:25:59.496 16:45:36 -- nvmf/common.sh@544 -- # jq . 
00:25:59.496 16:45:36 -- nvmf/common.sh@545 -- # IFS=, 00:25:59.496 16:45:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:59.496 "params": { 00:25:59.496 "name": "Nvme0", 00:25:59.496 "trtype": "tcp", 00:25:59.496 "traddr": "10.0.0.2", 00:25:59.496 "adrfam": "ipv4", 00:25:59.496 "trsvcid": "4420", 00:25:59.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:59.496 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:59.496 "hdgst": false, 00:25:59.496 "ddgst": false 00:25:59.496 }, 00:25:59.496 "method": "bdev_nvme_attach_controller" 00:25:59.496 },{ 00:25:59.496 "params": { 00:25:59.496 "name": "Nvme1", 00:25:59.496 "trtype": "tcp", 00:25:59.496 "traddr": "10.0.0.2", 00:25:59.496 "adrfam": "ipv4", 00:25:59.496 "trsvcid": "4420", 00:25:59.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.496 "hdgst": false, 00:25:59.496 "ddgst": false 00:25:59.496 }, 00:25:59.496 "method": "bdev_nvme_attach_controller" 00:25:59.496 }' 00:25:59.496 16:45:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:59.496 16:45:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:59.496 16:45:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.496 16:45:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:59.496 16:45:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:59.496 16:45:36 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:59.496 16:45:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:59.496 16:45:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:59.496 16:45:36 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:59.496 16:45:36 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:59.755 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:59.755 ... 00:25:59.755 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:59.755 ... 00:25:59.755 fio-3.35 00:25:59.755 Starting 4 threads 00:26:00.322 [2024-11-16 16:45:37.618601] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
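Before launching fio, the ldd/grep/awk probes above check the spdk_bdev fio plugin for linked sanitizer runtimes (libasan, libclang_rt.asan) so that any found library can be preloaded ahead of the plugin; none are linked in this build, so LD_PRELOAD ends up carrying only the plugin itself. Stripped to its essentials, the launch pattern is the following (paths from this run; the trailing process substitutions are an assumed stand-in for the /dev/fd plumbing dif.sh performs with its gen_nvmf_target_json and gen_fio_conf helpers):

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf /dev/fd/62 /dev/fd/61 \
  62< <(gen_nvmf_target_json 0 1) 61< <(gen_fio_conf)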
00:26:00.322 [2024-11-16 16:45:37.618668] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:05.593 00:26:05.593 filename0: (groupid=0, jobs=1): err= 0: pid=102942: Sat Nov 16 16:45:42 2024 00:26:05.593 read: IOPS=2289, BW=17.9MiB/s (18.8MB/s)(89.5MiB/5001msec) 00:26:05.593 slat (nsec): min=6068, max=74817, avg=10471.06, stdev=6988.21 00:26:05.593 clat (usec): min=862, max=5813, avg=3442.13, stdev=162.17 00:26:05.593 lat (usec): min=869, max=5819, avg=3452.60, stdev=162.27 00:26:05.593 clat percentiles (usec): 00:26:05.593 | 1.00th=[ 3163], 5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3326], 00:26:05.593 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458], 00:26:05.593 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3621], 95.00th=[ 3720], 00:26:05.593 | 99.00th=[ 3949], 99.50th=[ 4080], 99.90th=[ 4752], 99.95th=[ 4883], 00:26:05.593 | 99.99th=[ 5473] 00:26:05.593 bw ( KiB/s): min=17792, max=19072, per=25.04%, avg=18337.78, stdev=373.90, samples=9 00:26:05.593 iops : min= 2224, max= 2384, avg=2292.22, stdev=46.74, samples=9 00:26:05.593 lat (usec) : 1000=0.03% 00:26:05.593 lat (msec) : 2=0.01%, 4=99.25%, 10=0.72% 00:26:05.593 cpu : usr=95.52%, sys=3.34%, ctx=5, majf=0, minf=0 00:26:05.593 IO depths : 1=11.0%, 2=24.5%, 4=50.5%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.593 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.593 issued rwts: total=11451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.593 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:05.593 filename0: (groupid=0, jobs=1): err= 0: pid=102943: Sat Nov 16 16:45:42 2024 00:26:05.593 read: IOPS=2287, BW=17.9MiB/s (18.7MB/s)(89.4MiB/5001msec) 00:26:05.593 slat (usec): min=5, max=107, avg=18.45, stdev=10.77 00:26:05.593 clat (usec): min=1430, max=6527, avg=3403.64, stdev=214.12 00:26:05.593 lat (usec): min=1440, max=6538, avg=3422.09, stdev=214.72 00:26:05.593 clat percentiles (usec): 00:26:05.593 | 1.00th=[ 3064], 5.00th=[ 3195], 10.00th=[ 3228], 20.00th=[ 3294], 00:26:05.593 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3425], 00:26:05.593 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3589], 95.00th=[ 3687], 00:26:05.593 | 99.00th=[ 4015], 99.50th=[ 4359], 99.90th=[ 5407], 99.95th=[ 5669], 00:26:05.593 | 99.99th=[ 6063] 00:26:05.593 bw ( KiB/s): min=17792, max=18944, per=25.00%, avg=18304.00, stdev=371.20, samples=9 00:26:05.593 iops : min= 2224, max= 2368, avg=2288.00, stdev=46.40, samples=9 00:26:05.593 lat (msec) : 2=0.12%, 4=98.78%, 10=1.09% 00:26:05.593 cpu : usr=94.84%, sys=3.86%, ctx=61, majf=0, minf=9 00:26:05.593 IO depths : 1=7.6%, 2=25.0%, 4=50.0%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.593 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.593 issued rwts: total=11440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.593 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:05.593 filename1: (groupid=0, jobs=1): err= 0: pid=102944: Sat Nov 16 16:45:42 2024 00:26:05.593 read: IOPS=2289, BW=17.9MiB/s (18.8MB/s)(89.5MiB/5003msec) 00:26:05.593 slat (nsec): min=3312, max=80241, avg=13244.45, stdev=8172.82 00:26:05.593 clat (usec): min=1040, max=5637, avg=3436.21, stdev=237.65 00:26:05.593 lat (usec): min=1047, max=5660, avg=3449.45, stdev=237.46 00:26:05.593 clat percentiles (usec): 
00:26:05.593 | 1.00th=[ 2966], 5.00th=[ 3228], 10.00th=[ 3261], 20.00th=[ 3326], 00:26:05.593 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458], 00:26:05.593 | 70.00th=[ 3490], 80.00th=[ 3523], 90.00th=[ 3621], 95.00th=[ 3752], 00:26:05.593 | 99.00th=[ 4146], 99.50th=[ 4424], 99.90th=[ 5080], 99.95th=[ 5473], 00:26:05.593 | 99.99th=[ 5604] 00:26:05.593 bw ( KiB/s): min=17712, max=19072, per=25.04%, avg=18332.44, stdev=445.33, samples=9 00:26:05.593 iops : min= 2214, max= 2384, avg=2291.56, stdev=55.67, samples=9 00:26:05.593 lat (msec) : 2=0.43%, 4=98.04%, 10=1.53% 00:26:05.593 cpu : usr=93.38%, sys=5.02%, ctx=144, majf=0, minf=0 00:26:05.593 IO depths : 1=8.3%, 2=20.9%, 4=54.1%, 8=16.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.593 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.593 issued rwts: total=11456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.593 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:05.593 filename1: (groupid=0, jobs=1): err= 0: pid=102945: Sat Nov 16 16:45:42 2024 00:26:05.593 read: IOPS=2288, BW=17.9MiB/s (18.7MB/s)(89.4MiB/5002msec) 00:26:05.593 slat (usec): min=5, max=107, avg=18.59, stdev=10.71 00:26:05.593 clat (usec): min=566, max=6135, avg=3398.83, stdev=202.79 00:26:05.593 lat (usec): min=573, max=6161, avg=3417.42, stdev=203.39 00:26:05.593 clat percentiles (usec): 00:26:05.593 | 1.00th=[ 3097], 5.00th=[ 3195], 10.00th=[ 3228], 20.00th=[ 3294], 00:26:05.593 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3425], 00:26:05.593 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3589], 95.00th=[ 3687], 00:26:05.593 | 99.00th=[ 3949], 99.50th=[ 4228], 99.90th=[ 5080], 99.95th=[ 6063], 00:26:05.593 | 99.99th=[ 6128] 00:26:05.593 bw ( KiB/s): min=17792, max=19072, per=25.02%, avg=18318.78, stdev=387.50, samples=9 00:26:05.593 iops : min= 2224, max= 2384, avg=2289.78, stdev=48.39, samples=9 00:26:05.593 lat (usec) : 750=0.03%, 1000=0.03% 00:26:05.593 lat (msec) : 2=0.07%, 4=99.04%, 10=0.84% 00:26:05.593 cpu : usr=95.70%, sys=3.04%, ctx=8, majf=0, minf=9 00:26:05.593 IO depths : 1=10.7%, 2=24.8%, 4=50.2%, 8=14.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.593 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.593 issued rwts: total=11446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.593 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:05.593 00:26:05.593 Run status group 0 (all jobs): 00:26:05.593 READ: bw=71.5MiB/s (75.0MB/s), 17.9MiB/s-17.9MiB/s (18.7MB/s-18.8MB/s), io=358MiB (375MB), run=5001-5003msec 00:26:05.593 16:45:42 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:05.593 16:45:42 -- target/dif.sh@43 -- # local sub 00:26:05.593 16:45:42 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.593 16:45:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:05.593 16:45:42 -- target/dif.sh@36 -- # local sub_id=0 00:26:05.593 16:45:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:05.593 16:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.593 16:45:42 -- common/autotest_common.sh@10 -- # set +x 00:26:05.593 16:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.593 16:45:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:05.593 16:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:05.593 16:45:42 -- common/autotest_common.sh@10 -- # set +x 00:26:05.593 16:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.593 16:45:42 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.593 16:45:42 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:05.593 16:45:42 -- target/dif.sh@36 -- # local sub_id=1 00:26:05.593 16:45:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.593 16:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.593 16:45:42 -- common/autotest_common.sh@10 -- # set +x 00:26:05.593 16:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.593 16:45:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:05.593 16:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.593 16:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.593 ************************************ 00:26:05.593 END TEST fio_dif_rand_params 00:26:05.593 ************************************ 00:26:05.593 16:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.593 00:26:05.593 real 0m23.633s 00:26:05.593 user 2m7.762s 00:26:05.593 sys 0m3.731s 00:26:05.593 16:45:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:05.593 16:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.593 16:45:43 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:05.593 16:45:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:05.593 16:45:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:05.593 16:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.593 ************************************ 00:26:05.593 START TEST fio_dif_digest 00:26:05.593 ************************************ 00:26:05.593 16:45:43 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:05.593 16:45:43 -- target/dif.sh@123 -- # local NULL_DIF 00:26:05.593 16:45:43 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:05.593 16:45:43 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:05.593 16:45:43 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:05.593 16:45:43 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:05.593 16:45:43 -- target/dif.sh@127 -- # numjobs=3 00:26:05.594 16:45:43 -- target/dif.sh@127 -- # iodepth=3 00:26:05.594 16:45:43 -- target/dif.sh@127 -- # runtime=10 00:26:05.594 16:45:43 -- target/dif.sh@128 -- # hdgst=true 00:26:05.594 16:45:43 -- target/dif.sh@128 -- # ddgst=true 00:26:05.594 16:45:43 -- target/dif.sh@130 -- # create_subsystems 0 00:26:05.594 16:45:43 -- target/dif.sh@28 -- # local sub 00:26:05.594 16:45:43 -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.594 16:45:43 -- target/dif.sh@31 -- # create_subsystem 0 00:26:05.594 16:45:43 -- target/dif.sh@18 -- # local sub_id=0 00:26:05.594 16:45:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:05.594 16:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.594 16:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.852 bdev_null0 00:26:05.852 16:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.852 16:45:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:05.852 16:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.852 16:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.852 16:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.852 16:45:43 -- target/dif.sh@23 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:05.852 16:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.852 16:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.852 16:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.852 16:45:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:05.852 16:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.852 16:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:05.852 [2024-11-16 16:45:43.110806] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.852 16:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.852 16:45:43 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:05.852 16:45:43 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:05.852 16:45:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:05.852 16:45:43 -- nvmf/common.sh@520 -- # config=() 00:26:05.852 16:45:43 -- nvmf/common.sh@520 -- # local subsystem config 00:26:05.852 16:45:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:05.853 16:45:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:05.853 { 00:26:05.853 "params": { 00:26:05.853 "name": "Nvme$subsystem", 00:26:05.853 "trtype": "$TEST_TRANSPORT", 00:26:05.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.853 "adrfam": "ipv4", 00:26:05.853 "trsvcid": "$NVMF_PORT", 00:26:05.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.853 "hdgst": ${hdgst:-false}, 00:26:05.853 "ddgst": ${ddgst:-false} 00:26:05.853 }, 00:26:05.853 "method": "bdev_nvme_attach_controller" 00:26:05.853 } 00:26:05.853 EOF 00:26:05.853 )") 00:26:05.853 16:45:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.853 16:45:43 -- target/dif.sh@82 -- # gen_fio_conf 00:26:05.853 16:45:43 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.853 16:45:43 -- target/dif.sh@54 -- # local file 00:26:05.853 16:45:43 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:05.853 16:45:43 -- target/dif.sh@56 -- # cat 00:26:05.853 16:45:43 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:05.853 16:45:43 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:05.853 16:45:43 -- nvmf/common.sh@542 -- # cat 00:26:05.853 16:45:43 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.853 16:45:43 -- common/autotest_common.sh@1330 -- # shift 00:26:05.853 16:45:43 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:05.853 16:45:43 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.853 16:45:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:05.853 16:45:43 -- target/dif.sh@72 -- # (( file <= files )) 00:26:05.853 16:45:43 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.853 16:45:43 -- nvmf/common.sh@544 -- # jq . 
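The digest variant configures a single subsystem and sets "hdgst": true and "ddgst": true in the bdev_nvme_attach_controller params, enabling NVMe/TCP header and data digests (CRC32C) on the connection, which is the behavior fio_dif_digest exercises. The relevant fragment of the expanded config, as printed just below:

{ "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true, "ddgst": true },
  "method": "bdev_nvme_attach_controller" }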
00:26:05.853 16:45:43 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:05.853 16:45:43 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:05.853 16:45:43 -- nvmf/common.sh@545 -- # IFS=, 00:26:05.853 16:45:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:05.853 "params": { 00:26:05.853 "name": "Nvme0", 00:26:05.853 "trtype": "tcp", 00:26:05.853 "traddr": "10.0.0.2", 00:26:05.853 "adrfam": "ipv4", 00:26:05.853 "trsvcid": "4420", 00:26:05.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:05.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:05.853 "hdgst": true, 00:26:05.853 "ddgst": true 00:26:05.853 }, 00:26:05.853 "method": "bdev_nvme_attach_controller" 00:26:05.853 }' 00:26:05.853 16:45:43 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:05.853 16:45:43 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:05.853 16:45:43 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.853 16:45:43 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.853 16:45:43 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:05.853 16:45:43 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:05.853 16:45:43 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:05.853 16:45:43 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:05.853 16:45:43 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:05.853 16:45:43 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.853 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:05.853 ... 00:26:05.853 fio-3.35 00:26:05.853 Starting 3 threads 00:26:06.421 [2024-11-16 16:45:43.687744] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
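From the parameters traced above (bs=128k,128k,128k, numjobs=3, iodepth=3, runtime=10) and the job header fio echoes back, the job file fed in on /dev/fd/61 is approximately the following (a reconstruction, not the literal gen_fio_conf output; the Nvme0n1 filename assumes SPDK's usual <controller>n<nsid> bdev naming):

[filename0]
ioengine=spdk_bdev    ; already forced on the command line above
filename=Nvme0n1      ; assumed: namespace 1 of the attached Nvme0 controller
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10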
00:26:06.421 [2024-11-16 16:45:43.687828] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:16.407 00:26:16.407 filename0: (groupid=0, jobs=1): err= 0: pid=103051: Sat Nov 16 16:45:53 2024 00:26:16.407 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(333MiB/10005msec) 00:26:16.407 slat (nsec): min=6220, max=61370, avg=14910.02, stdev=6616.15 00:26:16.407 clat (usec): min=5632, max=53183, avg=11264.72, stdev=2188.79 00:26:16.407 lat (usec): min=5642, max=53194, avg=11279.63, stdev=2189.56 00:26:16.407 clat percentiles (usec): 00:26:16.407 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 8029], 20.00th=[10421], 00:26:16.407 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[11863], 00:26:16.407 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:26:16.407 | 99.00th=[13960], 99.50th=[14222], 99.90th=[51643], 99.95th=[52691], 00:26:16.407 | 99.99th=[53216] 00:26:16.407 bw ( KiB/s): min=30464, max=36864, per=34.05%, avg=34101.89, stdev=1712.16, samples=19 00:26:16.407 iops : min= 238, max= 288, avg=266.42, stdev=13.38, samples=19 00:26:16.407 lat (msec) : 10=14.81%, 20=85.08%, 100=0.11% 00:26:16.407 cpu : usr=93.97%, sys=4.31%, ctx=14, majf=0, minf=9 00:26:16.407 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.407 issued rwts: total=2660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:16.407 filename0: (groupid=0, jobs=1): err= 0: pid=103052: Sat Nov 16 16:45:53 2024 00:26:16.407 read: IOPS=235, BW=29.4MiB/s (30.8MB/s)(295MiB/10045msec) 00:26:16.407 slat (nsec): min=6064, max=59640, avg=11792.19, stdev=6447.57 00:26:16.407 clat (usec): min=7199, max=52502, avg=12717.03, stdev=1883.13 00:26:16.408 lat (usec): min=7217, max=52524, avg=12728.82, stdev=1883.90 00:26:16.408 clat percentiles (usec): 00:26:16.408 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[12387], 00:26:16.408 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:26:16.408 | 70.00th=[13435], 80.00th=[13698], 90.00th=[13960], 95.00th=[14353], 00:26:16.408 | 99.00th=[15139], 99.50th=[15795], 99.90th=[16712], 99.95th=[44827], 00:26:16.408 | 99.99th=[52691] 00:26:16.408 bw ( KiB/s): min=27868, max=33792, per=30.11%, avg=30158.00, stdev=1528.54, samples=20 00:26:16.408 iops : min= 217, max= 264, avg=235.55, stdev=12.00, samples=20 00:26:16.408 lat (msec) : 10=10.37%, 20=89.54%, 50=0.04%, 100=0.04% 00:26:16.408 cpu : usr=94.25%, sys=4.29%, ctx=9, majf=0, minf=9 00:26:16.408 IO depths : 1=22.7%, 2=77.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.408 issued rwts: total=2362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.408 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:16.408 filename0: (groupid=0, jobs=1): err= 0: pid=103053: Sat Nov 16 16:45:53 2024 00:26:16.408 read: IOPS=283, BW=35.4MiB/s (37.2MB/s)(355MiB/10007msec) 00:26:16.408 slat (nsec): min=3876, max=89836, avg=17244.77, stdev=7366.40 00:26:16.408 clat (usec): min=6117, max=52466, avg=10559.41, stdev=5627.24 00:26:16.408 lat (usec): min=6135, max=52486, avg=10576.65, stdev=5627.29 00:26:16.408 clat percentiles (usec): 
00:26:16.408 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:26:16.408 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:26:16.408 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:26:16.408 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:26:16.408 | 99.99th=[52691] 00:26:16.408 bw ( KiB/s): min=30720, max=39936, per=36.11%, avg=36159.63, stdev=2756.92, samples=19 00:26:16.408 iops : min= 240, max= 312, avg=282.47, stdev=21.55, samples=19 00:26:16.408 lat (msec) : 10=58.41%, 20=39.69%, 50=0.56%, 100=1.34% 00:26:16.408 cpu : usr=93.31%, sys=4.76%, ctx=16, majf=0, minf=9 00:26:16.408 IO depths : 1=2.5%, 2=97.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.408 issued rwts: total=2837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.408 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:16.408 00:26:16.408 Run status group 0 (all jobs): 00:26:16.408 READ: bw=97.8MiB/s (103MB/s), 29.4MiB/s-35.4MiB/s (30.8MB/s-37.2MB/s), io=982MiB (1030MB), run=10005-10045msec 00:26:16.667 16:45:54 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:16.667 16:45:54 -- target/dif.sh@43 -- # local sub 00:26:16.667 16:45:54 -- target/dif.sh@45 -- # for sub in "$@" 00:26:16.667 16:45:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:16.667 16:45:54 -- target/dif.sh@36 -- # local sub_id=0 00:26:16.667 16:45:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:16.667 16:45:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.667 16:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:16.667 16:45:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.667 16:45:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:16.667 16:45:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.667 16:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:16.667 ************************************ 00:26:16.667 END TEST fio_dif_digest 00:26:16.667 ************************************ 00:26:16.667 16:45:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.667 00:26:16.667 real 0m11.026s 00:26:16.667 user 0m28.847s 00:26:16.667 sys 0m1.622s 00:26:16.667 16:45:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:16.667 16:45:54 -- common/autotest_common.sh@10 -- # set +x 00:26:16.667 16:45:54 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:16.667 16:45:54 -- target/dif.sh@147 -- # nvmftestfini 00:26:16.667 16:45:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:16.667 16:45:54 -- nvmf/common.sh@116 -- # sync 00:26:16.926 16:45:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:16.926 16:45:54 -- nvmf/common.sh@119 -- # set +e 00:26:16.926 16:45:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:16.926 16:45:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:16.926 rmmod nvme_tcp 00:26:16.926 rmmod nvme_fabrics 00:26:16.926 rmmod nvme_keyring 00:26:16.926 16:45:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:16.926 16:45:54 -- nvmf/common.sh@123 -- # set -e 00:26:16.926 16:45:54 -- nvmf/common.sh@124 -- # return 0 00:26:16.926 16:45:54 -- nvmf/common.sh@477 -- # '[' -n 102285 ']' 00:26:16.926 16:45:54 -- nvmf/common.sh@478 -- # killprocess 102285 00:26:16.926 16:45:54 -- common/autotest_common.sh@936 -- # 
'[' -z 102285 ']' 00:26:16.926 16:45:54 -- common/autotest_common.sh@940 -- # kill -0 102285 00:26:16.926 16:45:54 -- common/autotest_common.sh@941 -- # uname 00:26:16.926 16:45:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:16.926 16:45:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102285 00:26:16.926 16:45:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:16.926 16:45:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:16.926 16:45:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102285' 00:26:16.926 killing process with pid 102285 00:26:16.926 16:45:54 -- common/autotest_common.sh@955 -- # kill 102285 00:26:16.926 16:45:54 -- common/autotest_common.sh@960 -- # wait 102285 00:26:17.185 16:45:54 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:17.185 16:45:54 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:17.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:17.703 Waiting for block devices as requested 00:26:17.703 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:17.703 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:17.703 16:45:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:17.703 16:45:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:17.703 16:45:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:17.703 16:45:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:17.703 16:45:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.703 16:45:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:17.703 16:45:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.703 16:45:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:17.703 00:26:17.703 real 1m0.248s 00:26:17.703 user 3m52.782s 00:26:17.703 sys 0m13.726s 00:26:17.703 ************************************ 00:26:17.703 END TEST nvmf_dif 00:26:17.703 ************************************ 00:26:17.703 16:45:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:17.703 16:45:55 -- common/autotest_common.sh@10 -- # set +x 00:26:17.963 16:45:55 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:17.963 16:45:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:17.963 16:45:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:17.963 16:45:55 -- common/autotest_common.sh@10 -- # set +x 00:26:17.963 ************************************ 00:26:17.963 START TEST nvmf_abort_qd_sizes 00:26:17.963 ************************************ 00:26:17.963 16:45:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:17.963 * Looking for test storage... 
00:26:17.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:26:17.963 16:45:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:26:17.963 16:45:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:26:17.963 16:45:55 -- common/autotest_common.sh@1690 -- # lcov --version
00:26:17.963 16:45:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:26:17.963 16:45:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:26:17.963 16:45:55 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:26:17.963 16:45:55 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:26:17.963 16:45:55 -- scripts/common.sh@335 -- # IFS=.-:
00:26:17.963 16:45:55 -- scripts/common.sh@335 -- # read -ra ver1
00:26:17.963 16:45:55 -- scripts/common.sh@336 -- # IFS=.-:
00:26:17.963 16:45:55 -- scripts/common.sh@336 -- # read -ra ver2
00:26:17.963 16:45:55 -- scripts/common.sh@337 -- # local 'op=<'
00:26:17.963 16:45:55 -- scripts/common.sh@339 -- # ver1_l=2
00:26:17.963 16:45:55 -- scripts/common.sh@340 -- # ver2_l=1
00:26:17.963 16:45:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:26:17.963 16:45:55 -- scripts/common.sh@343 -- # case "$op" in
00:26:17.963 16:45:55 -- scripts/common.sh@344 -- # : 1
00:26:17.963 16:45:55 -- scripts/common.sh@363 -- # (( v = 0 ))
00:26:17.963 16:45:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:17.963 16:45:55 -- scripts/common.sh@364 -- # decimal 1
00:26:17.963 16:45:55 -- scripts/common.sh@352 -- # local d=1
00:26:17.963 16:45:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:17.963 16:45:55 -- scripts/common.sh@354 -- # echo 1
00:26:17.963 16:45:55 -- scripts/common.sh@364 -- # ver1[v]=1
00:26:17.963 16:45:55 -- scripts/common.sh@365 -- # decimal 2
00:26:17.963 16:45:55 -- scripts/common.sh@352 -- # local d=2
00:26:17.963 16:45:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:17.963 16:45:55 -- scripts/common.sh@354 -- # echo 2
00:26:17.963 16:45:55 -- scripts/common.sh@365 -- # ver2[v]=2
00:26:17.963 16:45:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:26:17.963 16:45:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:26:17.963 16:45:55 -- scripts/common.sh@367 -- # return 0
00:26:17.963 16:45:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:17.963 16:45:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:26:17.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:17.963 --rc genhtml_branch_coverage=1
00:26:17.963 --rc genhtml_function_coverage=1
00:26:17.963 --rc genhtml_legend=1
00:26:17.963 --rc geninfo_all_blocks=1
00:26:17.963 --rc geninfo_unexecuted_blocks=1
00:26:17.963
00:26:17.963 '
00:26:17.963 16:45:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:26:17.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:17.963 --rc genhtml_branch_coverage=1
00:26:17.963 --rc genhtml_function_coverage=1
00:26:17.963 --rc genhtml_legend=1
00:26:17.963 --rc geninfo_all_blocks=1
00:26:17.963 --rc geninfo_unexecuted_blocks=1
00:26:17.963
00:26:17.963 '
00:26:17.963 16:45:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:26:17.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:17.963 --rc genhtml_branch_coverage=1
00:26:17.963 --rc genhtml_function_coverage=1
00:26:17.963 --rc genhtml_legend=1
00:26:17.963 --rc geninfo_all_blocks=1
00:26:17.963 --rc geninfo_unexecuted_blocks=1
00:26:17.963
00:26:17.963 '
00:26:17.963 16:45:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:26:17.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:17.963 --rc genhtml_branch_coverage=1
00:26:17.963 --rc genhtml_function_coverage=1
00:26:17.963 --rc genhtml_legend=1
00:26:17.963 --rc geninfo_all_blocks=1
00:26:17.963 --rc geninfo_unexecuted_blocks=1
00:26:17.963
00:26:17.963 '
00:26:17.963 16:45:55 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:26:17.963 16:45:55 -- nvmf/common.sh@7 -- # uname -s
00:26:17.963 16:45:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:17.963 16:45:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:17.963 16:45:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:17.963 16:45:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:17.963 16:45:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:17.963 16:45:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:17.963 16:45:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:17.963 16:45:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:17.963 16:45:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:17.963 16:45:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:17.963 16:45:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007
00:26:17.963 16:45:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=dcaf3c85-349e-474a-91c8-b5dfcb47b007
00:26:17.963 16:45:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:17.963 16:45:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:17.963 16:45:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:26:17.963 16:45:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:26:17.963 16:45:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:17.963 16:45:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:17.963 16:45:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:17.963 16:45:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:17.963 16:45:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:17.963 16:45:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.222 16:45:55 -- paths/export.sh@5 -- # export PATH
00:26:18.222 16:45:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:18.222 16:45:55 -- nvmf/common.sh@46 -- # : 0
00:26:18.222 16:45:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:26:18.222 16:45:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:26:18.222 16:45:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:26:18.222 16:45:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:18.222 16:45:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:18.222 16:45:55 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:26:18.222 16:45:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:26:18.222 16:45:55 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:26:18.222 16:45:55 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit
00:26:18.222 16:45:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:26:18.222 16:45:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:18.222 16:45:55 -- nvmf/common.sh@436 -- # prepare_net_devs
00:26:18.222 16:45:55 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:26:18.222 16:45:55 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:26:18.222 16:45:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:18.222 16:45:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:26:18.222 16:45:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:18.222 16:45:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:26:18.222 16:45:55 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:26:18.222 16:45:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:26:18.222 16:45:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:26:18.222 16:45:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:26:18.222 16:45:55 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:26:18.222 16:45:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:18.222 16:45:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:18.222 16:45:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:26:18.222 16:45:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:26:18.222 16:45:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:26:18.222 16:45:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:26:18.223 16:45:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:26:18.223 16:45:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:18.223 16:45:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:26:18.223 16:45:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:26:18.223 16:45:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:26:18.223 16:45:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:26:18.223 16:45:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:26:18.223 16:45:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:26:18.223 Cannot find device "nvmf_tgt_br"
00:26:18.223 16:45:55 -- nvmf/common.sh@154 -- # true
00:26:18.223 16:45:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:26:18.223 Cannot find device "nvmf_tgt_br2"
00:26:18.223 16:45:55 -- nvmf/common.sh@155 -- # true
00:26:18.223 16:45:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:26:18.223 16:45:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:26:18.223 Cannot find device "nvmf_tgt_br"
00:26:18.223 16:45:55 -- nvmf/common.sh@157 -- # true
00:26:18.223 16:45:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:26:18.223 Cannot find device "nvmf_tgt_br2"
00:26:18.223 16:45:55 -- nvmf/common.sh@158 -- # true
00:26:18.223 16:45:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:26:18.223 16:45:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:26:18.223 16:45:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:18.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:18.223 16:45:55 -- nvmf/common.sh@161 -- # true
00:26:18.223 16:45:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:18.223 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:26:18.223 16:45:55 -- nvmf/common.sh@162 -- # true
00:26:18.223 16:45:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:26:18.223 16:45:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:26:18.223 16:45:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:26:18.223 16:45:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:26:18.223 16:45:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:26:18.223 16:45:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:26:18.223 16:45:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:26:18.223 16:45:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:26:18.223 16:45:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:26:18.223 16:45:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:26:18.223 16:45:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:26:18.223 16:45:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:26:18.223 16:45:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:26:18.481 16:45:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:26:18.481 16:45:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:26:18.481 16:45:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:26:18.481 16:45:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:26:18.481 16:45:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:26:18.481 16:45:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:26:18.481 16:45:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:26:18.481 16:45:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:26:18.481 16:45:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:26:18.481 16:45:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:26:18.481 16:45:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:26:18.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:18.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms
00:26:18.482
00:26:18.482 --- 10.0.0.2 ping statistics ---
00:26:18.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:18.482 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms
00:26:18.482 16:45:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:26:18.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:26:18.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms
00:26:18.482
00:26:18.482 --- 10.0.0.3 ping statistics ---
00:26:18.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:18.482 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:26:18.482 16:45:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:26:18.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:18.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:26:18.482
00:26:18.482 --- 10.0.0.1 ping statistics ---
00:26:18.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:18.482 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:26:18.482 16:45:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:18.482 16:45:55 -- nvmf/common.sh@421 -- # return 0
00:26:18.482 16:45:55 -- nvmf/common.sh@438 -- # '[' iso == iso ']'
00:26:18.482 16:45:55 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:26:19.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:19.307 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:26:19.307 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:26:19.307 16:45:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:19.307 16:45:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:26:19.307 16:45:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:26:19.307 16:45:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:19.307 16:45:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:26:19.307 16:45:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:26:19.307 16:45:56 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf
00:26:19.307 16:45:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:26:19.307 16:45:56 -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:19.307 16:45:56 -- common/autotest_common.sh@10 -- # set +x
00:26:19.307 16:45:56 -- nvmf/common.sh@469 -- # nvmfpid=103656
00:26:19.307 16:45:56 -- nvmf/common.sh@470 -- # waitforlisten 103656
00:26:19.307 16:45:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:26:19.307 16:45:56 -- common/autotest_common.sh@829 -- # '[' -z 103656 ']'
00:26:19.307 16:45:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:19.307 16:45:56 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:19.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:19.307 16:45:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:19.307 16:45:56 -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:19.307 16:45:56 -- common/autotest_common.sh@10 -- # set +x
00:26:19.566 [2024-11-16 16:45:56.843533] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
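For reference, the nvmf_veth_init sequence traced above boils down to the following standalone sketch. It is condensed from the commands visible in the xtrace (the teardown of stale interfaces is omitted); it is not the verbatim nvmf/common.sh source:

    # Sketch: the veth/namespace topology nvmf_veth_init builds above.
    set -e
    ip netns add nvmf_tgt_ns_spdk
    # One veth pair per endpoint; the *_br ends stay in the root namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # Target-side interfaces move into the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c \
        'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    # A bridge ties the root-namespace ends together so 10.0.0.1 can reach .2/.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The bridge is what lets the initiator-side 10.0.0.1 reach the target addresses 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, which the three pings above verify before nvmf_tgt is launched in that namespace.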
00:26:19.566 [2024-11-16 16:45:56.843630] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:19.566 [2024-11-16 16:45:56.985504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:19.825 [2024-11-16 16:45:57.057856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:19.825 [2024-11-16 16:45:57.058088] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:19.825 [2024-11-16 16:45:57.058110] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:19.825 [2024-11-16 16:45:57.058123] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:19.825 [2024-11-16 16:45:57.058237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:19.825 [2024-11-16 16:45:57.059221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:26:19.825 [2024-11-16 16:45:57.059332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:26:19.825 [2024-11-16 16:45:57.059346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:20.392 16:45:57 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:20.392 16:45:57 -- common/autotest_common.sh@862 -- # return 0
00:26:20.392 16:45:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:26:20.392 16:45:57 -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:20.392 16:45:57 -- common/autotest_common.sh@10 -- # set +x
00:26:20.652 16:45:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:20.652 16:45:57 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:26:20.652 16:45:57 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes
00:26:20.652 16:45:57 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace
00:26:20.652 16:45:57 -- scripts/common.sh@311 -- # local bdf bdfs
00:26:20.652 16:45:57 -- scripts/common.sh@312 -- # local nvmes
00:26:20.652 16:45:57 -- scripts/common.sh@314 -- # [[ -n '' ]]
00:26:20.652 16:45:57 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:26:20.652 16:45:57 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02
00:26:20.652 16:45:57 -- scripts/common.sh@297 -- # local bdf=
00:26:20.652 16:45:57 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02
00:26:20.652 16:45:57 -- scripts/common.sh@232 -- # local class
00:26:20.652 16:45:57 -- scripts/common.sh@233 -- # local subclass
00:26:20.652 16:45:57 -- scripts/common.sh@234 -- # local progif
00:26:20.652 16:45:57 -- scripts/common.sh@235 -- # printf %02x 1
00:26:20.652 16:45:57 -- scripts/common.sh@235 -- # class=01
00:26:20.652 16:45:57 -- scripts/common.sh@236 -- # printf %02x 8
00:26:20.652 16:45:57 -- scripts/common.sh@236 -- # subclass=08
00:26:20.652 16:45:57 -- scripts/common.sh@237 -- # printf %02x 2
00:26:20.652 16:45:57 -- scripts/common.sh@237 -- # progif=02
00:26:20.652 16:45:57 -- scripts/common.sh@239 -- # hash lspci
00:26:20.652 16:45:57 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']'
00:26:20.652 16:45:57 -- scripts/common.sh@241 -- # lspci -mm -n -D
00:26:20.652 16:45:57 -- scripts/common.sh@242 -- # grep -i -- -p02
00:26:20.652 16:45:57 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:26:20.652 16:45:57 -- scripts/common.sh@244 -- # tr -d '"'
00:26:20.652 16:45:57 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@")
00:26:20.652 16:45:57 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0
00:26:20.652 16:45:57 -- scripts/common.sh@15 -- # local i
00:26:20.652 16:45:57 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]]
00:26:20.652 16:45:57 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:26:20.652 16:45:57 -- scripts/common.sh@24 -- # return 0
00:26:20.652 16:45:57 -- scripts/common.sh@301 -- # echo 0000:00:06.0
00:26:20.652 16:45:57 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@")
00:26:20.652 16:45:57 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0
00:26:20.652 16:45:57 -- scripts/common.sh@15 -- # local i
00:26:20.652 16:45:57 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]]
00:26:20.652 16:45:57 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:26:20.652 16:45:57 -- scripts/common.sh@24 -- # return 0
00:26:20.652 16:45:57 -- scripts/common.sh@301 -- # echo 0000:00:07.0
00:26:20.652 16:45:57 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:26:20.652 16:45:57 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]]
00:26:20.652 16:45:57 -- scripts/common.sh@322 -- # uname -s
00:26:20.652 16:45:57 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:26:20.652 16:45:57 -- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:26:20.652 16:45:57 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:26:20.652 16:45:57 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]]
00:26:20.652 16:45:57 -- scripts/common.sh@322 -- # uname -s
00:26:20.652 16:45:57 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:26:20.652 16:45:57 -- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:26:20.652 16:45:57 -- scripts/common.sh@327 -- # (( 2 ))
00:26:20.652 16:45:57 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0
00:26:20.652 16:45:57 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 ))
00:26:20.652 16:45:57 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0
00:26:20.652 16:45:57 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target
00:26:20.652 16:45:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:20.652 16:45:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:20.652 16:45:57 -- common/autotest_common.sh@10 -- # set +x
00:26:20.652 ************************************
00:26:20.652 START TEST spdk_target_abort
00:26:20.652 ************************************
00:26:20.652 16:45:57 -- common/autotest_common.sh@1114 -- # spdk_target
00:26:20.652 16:45:57 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:26:20.652 16:45:57 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target
00:26:20.652 16:45:57 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target
00:26:20.652 16:45:57 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.652 16:45:57 -- common/autotest_common.sh@10 -- # set +x
00:26:20.652 spdk_targetn1
00:26:20.652 16:45:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:20.652 16:45:58 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.652 16:45:58 -- common/autotest_common.sh@10 -- # set +x
00:26:20.652 [2024-11-16 16:45:58.052232] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:20.652 16:45:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
00:26:20.652 16:45:58 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.652 16:45:58 -- common/autotest_common.sh@10 -- # set +x
00:26:20.652 16:45:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
00:26:20.652 16:45:58 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.652 16:45:58 -- common/autotest_common.sh@10 -- # set +x
00:26:20.652 16:45:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
00:26:20.652 16:45:58 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:20.652 16:45:58 -- common/autotest_common.sh@10 -- # set +x
00:26:20.652 [2024-11-16 16:45:58.084462] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:20.652 16:45:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@24 -- # local target r
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:20.652 16:45:58 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:26:23.939 Initializing NVMe Controllers
00:26:23.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target
00:26:23.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0
00:26:23.939 Initialization complete. Launching workers.
00:26:23.939 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10427, failed: 0
00:26:23.939 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1178, failed to submit 9249
00:26:23.939 success 754, unsuccess 424, failed 0
00:26:23.939 16:46:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:23.939 16:46:01 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:26:27.228 Initializing NVMe Controllers
00:26:27.228 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target
00:26:27.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0
00:26:27.228 Initialization complete. Launching workers.
00:26:27.228 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5920, failed: 0
00:26:27.228 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1242, failed to submit 4678
00:26:27.228 success 237, unsuccess 1005, failed 0
00:26:27.228 16:46:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:27.228 16:46:04 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
00:26:30.525 Initializing NVMe Controllers
00:26:30.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target
00:26:30.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0
00:26:30.525 Initialization complete. Launching workers.
00:26:30.525 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30539, failed: 0
00:26:30.525 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2704, failed to submit 27835
00:26:30.525 success 359, unsuccess 2345, failed 0
00:26:30.525 16:46:07 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target
00:26:30.525 16:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.525 16:46:07 -- common/autotest_common.sh@10 -- # set +x
00:26:30.525 16:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.525 16:46:07 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:26:30.525 16:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.525 16:46:07 -- common/autotest_common.sh@10 -- # set +x
00:26:30.837 16:46:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.837 16:46:08 -- target/abort_qd_sizes.sh@62 -- # killprocess 103656
00:26:30.837 16:46:08 -- common/autotest_common.sh@936 -- # '[' -z 103656 ']'
00:26:30.837 16:46:08 -- common/autotest_common.sh@940 -- # kill -0 103656
00:26:30.837 16:46:08 -- common/autotest_common.sh@941 -- # uname
00:26:30.837 16:46:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:30.837 16:46:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103656
00:26:30.837 killing process with pid 103656
00:26:30.837 16:46:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:30.837 16:46:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:30.837 16:46:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103656'
00:26:30.837 16:46:08 -- common/autotest_common.sh@955 -- # kill 103656
00:26:30.837 16:46:08 -- common/autotest_common.sh@960 -- # wait 103656
00:26:31.163
00:26:31.163 real 0m10.635s
00:26:31.163 user 0m43.539s
00:26:31.163 sys 0m1.841s
00:26:31.163 16:46:08 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:31.163 16:46:08 -- common/autotest_common.sh@10 -- # set +x
00:26:31.163 ************************************
00:26:31.163 END TEST spdk_target_abort
00:26:31.163 ************************************
00:26:31.449 16:46:08 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target
00:26:31.449 16:46:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:31.449 16:46:08 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:31.449 16:46:08 -- common/autotest_common.sh@10 -- # set +x
00:26:31.449 ************************************
00:26:31.449 START TEST kernel_target_abort
00:26:31.449 ************************************
00:26:31.449 16:46:08 -- common/autotest_common.sh@1114 -- # kernel_target
00:26:31.449 16:46:08 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target
00:26:31.449 16:46:08 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target
00:26:31.449 16:46:08 -- nvmf/common.sh@621 -- # kernel_name=kernel_target
00:26:31.449 16:46:08 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet
00:26:31.449 16:46:08 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target
00:26:31.449 16:46:08 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:26:31.449 16:46:08 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:26:31.449 16:46:08 -- nvmf/common.sh@627 -- # local block nvme
00:26:31.449 16:46:08 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]]
00:26:31.449 16:46:08 -- nvmf/common.sh@630 -- # modprobe nvmet
00:26:31.449 16:46:08 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:31.449 16:46:08 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:26:31.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:31.708 Waiting for block devices as requested
00:26:31.708 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:26:31.966 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:26:31.966 16:46:09 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme*
00:26:31.966 16:46:09 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]]
00:26:31.966 16:46:09 -- nvmf/common.sh@640 -- # block_in_use nvme0n1
00:26:31.966 16:46:09 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:26:31.966 16:46:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:26:31.966 No valid GPT data, bailing
00:26:31.966 16:46:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:26:31.966 16:46:09 -- scripts/common.sh@393 -- # pt=
00:26:31.966 16:46:09 -- scripts/common.sh@394 -- # return 1
00:26:31.966 16:46:09 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1
00:26:31.966 16:46:09 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme*
00:26:31.966 16:46:09 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]]
00:26:31.966 16:46:09 -- nvmf/common.sh@640 -- # block_in_use nvme1n1
00:26:31.966 16:46:09 -- scripts/common.sh@380 -- # local block=nvme1n1 pt
00:26:31.966 16:46:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:26:31.966 No valid GPT data, bailing
00:26:31.966 16:46:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:26:31.966 16:46:09 -- scripts/common.sh@393 -- # pt=
00:26:31.966 16:46:09 -- scripts/common.sh@394 -- # return 1
00:26:31.966 16:46:09 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1
00:26:31.966 16:46:09 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme*
00:26:31.966 16:46:09 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]]
00:26:31.966 16:46:09 -- nvmf/common.sh@640 -- # block_in_use nvme1n2
00:26:31.966 16:46:09 -- scripts/common.sh@380 -- # local block=nvme1n2 pt
00:26:31.966 16:46:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2
00:26:32.225 No valid GPT data, bailing
00:26:32.225 16:46:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:26:32.225 16:46:09 -- scripts/common.sh@393 -- # pt=
00:26:32.225 16:46:09 -- scripts/common.sh@394 -- # return 1
00:26:32.225 16:46:09 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2
00:26:32.225 16:46:09 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme*
00:26:32.225 16:46:09 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]]
00:26:32.225 16:46:09 -- nvmf/common.sh@640 -- # block_in_use nvme1n3
00:26:32.225 16:46:09 -- scripts/common.sh@380 -- # local block=nvme1n3 pt
00:26:32.225 16:46:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3
00:26:32.225 No valid GPT data, bailing
00:26:32.225 16:46:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:26:32.225 16:46:09 -- scripts/common.sh@393 -- # pt=
00:26:32.225 16:46:09 -- scripts/common.sh@394 -- # return 1
00:26:32.225 16:46:09 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3
00:26:32.225 16:46:09 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n3 ]]
00:26:32.225 16:46:09 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target
00:26:32.225 16:46:09 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:26:32.225 16:46:09 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:26:32.225 16:46:09 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target
00:26:32.225 16:46:09 -- nvmf/common.sh@654 -- # echo 1
00:26:32.225 16:46:09 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3
00:26:32.225 16:46:09 -- nvmf/common.sh@656 -- # echo 1
00:26:32.225 16:46:09 -- nvmf/common.sh@662 -- # echo 10.0.0.1
00:26:32.225 16:46:09 -- nvmf/common.sh@663 -- # echo tcp
00:26:32.225 16:46:09 -- nvmf/common.sh@664 -- # echo 4420
00:26:32.225 16:46:09 -- nvmf/common.sh@665 -- # echo ipv4
00:26:32.225 16:46:09 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/
00:26:32.225 16:46:09 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dcaf3c85-349e-474a-91c8-b5dfcb47b007 --hostid=dcaf3c85-349e-474a-91c8-b5dfcb47b007 -a 10.0.0.1 -t tcp -s 4420
00:26:32.225
00:26:32.225 Discovery Log Number of Records 2, Generation counter 2
00:26:32.225 =====Discovery Log Entry 0======
00:26:32.225 trtype: tcp
00:26:32.225 adrfam: ipv4
00:26:32.225 subtype: current discovery subsystem
00:26:32.225 treq: not specified, sq flow control disable supported
00:26:32.225 portid: 1
00:26:32.225 trsvcid: 4420
00:26:32.225 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:26:32.225 traddr: 10.0.0.1
00:26:32.225 eflags: none
00:26:32.225 sectype: none
00:26:32.225 =====Discovery Log Entry 1======
00:26:32.225 trtype: tcp
00:26:32.225 adrfam: ipv4
00:26:32.225 subtype: nvme subsystem
00:26:32.225 treq: not specified, sq flow control disable supported
00:26:32.225 portid: 1
00:26:32.225 trsvcid: 4420
00:26:32.225 subnqn: kernel_target
00:26:32.225 traddr: 10.0.0.1
00:26:32.225 eflags: none
00:26:32.225 sectype: none
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@24 -- # local target r
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
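An aside on the configure_kernel_target trace above (nvmf/common.sh@645 through @668): the xtrace shows the echoed values but not the redirection targets, so the exact configfs files are not visible in this log. The following sketch reconstructs the equivalent setup; the attr_* / device_path / enable paths are the standard Linux nvmet configfs attribute names and are an assumption about where each value lands, not something the trace confirms:

    # Hedged sketch of the in-kernel NVMe-oF target configured above.
    modprobe nvmet       # shown in the trace
    modprobe nvmet-tcp   # implied by the tcp port below (cleanup removes nvmet_tcp)
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/kernel_target
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-kernel_target > "$subsys/attr_serial"         # assumed attribute file
    echo 1 > "$subsys/attr_allow_any_host"                  # assumed attribute file
    echo /dev/nvme1n3 > "$subsys/namespaces/1/device_path"  # block device picked above
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    # Linking the subsystem under the port is what makes it discoverable:
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    nvme discover -a 10.0.0.1 -t tcp -s 4420   # should list kernel_target, as above

The clean_kernel_target trace later in this log tears the same tree down in reverse: disable the namespace, remove the port link, rmdir the configfs directories, then modprobe -r nvmet_tcp nvmet.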
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:32.226 16:46:09 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:26:35.517 Initializing NVMe Controllers
00:26:35.517 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target
00:26:35.517 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0
00:26:35.517 Initialization complete. Launching workers.
00:26:35.517 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 36698, failed: 0
00:26:35.517 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 36698, failed to submit 0
00:26:35.517 success 0, unsuccess 36698, failed 0
00:26:35.518 16:46:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:35.518 16:46:12 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:26:38.808 Initializing NVMe Controllers
00:26:38.808 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target
00:26:38.808 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0
00:26:38.808 Initialization complete. Launching workers.
00:26:38.808 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 87139, failed: 0
00:26:38.808 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 37287, failed to submit 49852
00:26:38.808 success 0, unsuccess 37287, failed 0
00:26:38.808 16:46:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:26:38.808 16:46:16 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
00:26:42.094 Initializing NVMe Controllers
00:26:42.094 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target
00:26:42.094 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0
00:26:42.094 Initialization complete. Launching workers.
00:26:42.094 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 106752, failed: 0
00:26:42.094 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26680, failed to submit 80072
00:26:42.094 success 0, unsuccess 26680, failed 0
00:26:42.094 16:46:19 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target
00:26:42.094 16:46:19 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]]
00:26:42.094 16:46:19 -- nvmf/common.sh@677 -- # echo 0
00:26:42.094 16:46:19 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
00:26:42.094 16:46:19 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
00:26:42.094 16:46:19 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:26:42.094 16:46:19 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target
00:26:42.094 16:46:19 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*)
00:26:42.094 16:46:19 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet
00:26:42.094
00:26:42.094 real 0m10.572s
00:26:42.094 user 0m5.717s
00:26:42.094 sys 0m2.097s
00:26:42.094 16:46:19 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:42.094 16:46:19 -- common/autotest_common.sh@10 -- # set +x
00:26:42.094 ************************************
00:26:42.094 END TEST kernel_target_abort
00:26:42.094 ************************************
00:26:42.094 16:46:19 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT
00:26:42.094 16:46:19 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini
00:26:42.094 16:46:19 -- nvmf/common.sh@476 -- # nvmfcleanup
00:26:42.094 16:46:19 -- nvmf/common.sh@116 -- # sync
00:26:42.094 16:46:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:42.094 16:46:19 -- nvmf/common.sh@119 -- # set +e
00:26:42.094 16:46:19 -- nvmf/common.sh@120 -- # for i in {1..20}
00:26:42.094 16:46:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:26:42.094 rmmod nvme_tcp
00:26:42.094 rmmod nvme_fabrics
00:26:42.094 rmmod nvme_keyring
00:26:42.094 16:46:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:42.094 16:46:19 -- nvmf/common.sh@123 -- # set -e
00:26:42.094 16:46:19 -- nvmf/common.sh@124 -- # return 0
00:26:42.094 16:46:19 -- nvmf/common.sh@477 -- # '[' -n 103656 ']'
00:26:42.094 16:46:19 -- nvmf/common.sh@478 -- # killprocess 103656
00:26:42.094 16:46:19 -- common/autotest_common.sh@936 -- # '[' -z 103656 ']'
00:26:42.094 16:46:19 -- common/autotest_common.sh@940 -- # kill -0 103656
00:26:42.094 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103656) - No such process
00:26:42.094 Process with pid 103656 is not found
00:26:42.094 16:46:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103656 is not found'
00:26:42.094 16:46:19 -- nvmf/common.sh@480 -- # '[' iso == iso ']'
00:26:42.094 16:46:19 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:26:42.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:42.661 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:26:42.920 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:26:42.920 16:46:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:26:42.920 16:46:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:26:42.920 16:46:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:42.920 16:46:20 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:26:42.920 16:46:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:42.920 16:46:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:26:42.920 16:46:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:42.920 16:46:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:26:42.920
00:26:42.920 real 0m24.978s
00:26:42.920 user 0m50.832s
00:26:42.920 sys 0m5.340s
00:26:42.920 16:46:20 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:26:42.920 16:46:20 -- common/autotest_common.sh@10 -- # set +x
00:26:42.920 ************************************
00:26:42.920 END TEST nvmf_abort_qd_sizes
00:26:42.920 ************************************
00:26:42.920 16:46:20 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:26:42.920 16:46:20 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]]
00:26:42.920 16:46:20 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:26:42.920 16:46:20 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:26:42.920 16:46:20 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:26:42.920 16:46:20 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:26:42.920 16:46:20 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:26:42.920 16:46:20 -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:42.920 16:46:20 -- common/autotest_common.sh@10 -- # set +x
00:26:42.920 16:46:20 -- spdk/autotest.sh@373 -- # autotest_cleanup
00:26:42.920 16:46:20 -- common/autotest_common.sh@1381 -- # local autotest_es=0
00:26:42.920 16:46:20 -- common/autotest_common.sh@1382 -- # xtrace_disable
00:26:42.920 16:46:20 -- common/autotest_common.sh@10 -- # set +x
00:26:44.824 INFO: APP EXITING
00:26:44.824 INFO: killing all VMs
00:26:44.824 INFO: killing vhost app
00:26:44.824 INFO: EXIT DONE
00:26:45.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:45.649 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:26:45.649 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:26:46.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:46.216 Cleaning
00:26:46.216 Removing: /var/run/dpdk/spdk0/config
00:26:46.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:26:46.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:26:46.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:26:46.216 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:26:46.216 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:26:46.216 Removing: /var/run/dpdk/spdk0/hugepage_info
00:26:46.216 Removing: /var/run/dpdk/spdk1/config
00:26:46.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:26:46.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:26:46.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:26:46.216 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:26:46.216 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:26:46.475 Removing: /var/run/dpdk/spdk1/hugepage_info
00:26:46.475 Removing: /var/run/dpdk/spdk2/config
00:26:46.475 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:26:46.475 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:26:46.475 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:26:46.475 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:26:46.475 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:26:46.475 Removing: /var/run/dpdk/spdk2/hugepage_info
00:26:46.475 Removing: /var/run/dpdk/spdk3/config
00:26:46.475 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:26:46.475 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:26:46.475 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:26:46.475 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:26:46.475 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:26:46.475 Removing: /var/run/dpdk/spdk3/hugepage_info
00:26:46.475 Removing: /var/run/dpdk/spdk4/config
00:26:46.475 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:26:46.475 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:26:46.475 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:26:46.475 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:26:46.475 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:26:46.475 Removing: /var/run/dpdk/spdk4/hugepage_info
00:26:46.475 Removing: /dev/shm/nvmf_trace.0
00:26:46.475 Removing: /dev/shm/spdk_tgt_trace.pid67592
00:26:46.475 Removing: /var/run/dpdk/spdk0
00:26:46.475 Removing: /var/run/dpdk/spdk1
00:26:46.475 Removing: /var/run/dpdk/spdk2
00:26:46.475 Removing: /var/run/dpdk/spdk3
00:26:46.475 Removing: /var/run/dpdk/spdk4
00:26:46.475 Removing: /var/run/dpdk/spdk_pid100621
00:26:46.476 Removing: /var/run/dpdk/spdk_pid100822
00:26:46.476 Removing: /var/run/dpdk/spdk_pid101111
00:26:46.476 Removing: /var/run/dpdk/spdk_pid101420
00:26:46.476 Removing: /var/run/dpdk/spdk_pid101982
00:26:46.476 Removing: /var/run/dpdk/spdk_pid101987
00:26:46.476 Removing: /var/run/dpdk/spdk_pid102361
00:26:46.476 Removing: /var/run/dpdk/spdk_pid102524
00:26:46.476 Removing: /var/run/dpdk/spdk_pid102681
00:26:46.476 Removing: /var/run/dpdk/spdk_pid102778
00:26:46.476 Removing: /var/run/dpdk/spdk_pid102938
00:26:46.476 Removing: /var/run/dpdk/spdk_pid103047
00:26:46.476 Removing: /var/run/dpdk/spdk_pid103725
00:26:46.476 Removing: /var/run/dpdk/spdk_pid103761
00:26:46.476 Removing: /var/run/dpdk/spdk_pid103796
00:26:46.476 Removing: /var/run/dpdk/spdk_pid104046
00:26:46.476 Removing: /var/run/dpdk/spdk_pid104082
00:26:46.476 Removing: /var/run/dpdk/spdk_pid104113
00:26:46.476 Removing: /var/run/dpdk/spdk_pid67440
00:26:46.476 Removing: /var/run/dpdk/spdk_pid67592
00:26:46.476 Removing: /var/run/dpdk/spdk_pid67919
00:26:46.476 Removing: /var/run/dpdk/spdk_pid68188
00:26:46.476 Removing: /var/run/dpdk/spdk_pid68371
00:26:46.476 Removing: /var/run/dpdk/spdk_pid68449
00:26:46.476 Removing: /var/run/dpdk/spdk_pid68548
00:26:46.476 Removing: /var/run/dpdk/spdk_pid68650
00:26:46.476 Removing: /var/run/dpdk/spdk_pid68683
00:26:46.476 Removing: /var/run/dpdk/spdk_pid68724
00:26:46.476 Removing: /var/run/dpdk/spdk_pid68787
00:26:46.476 Removing: /var/run/dpdk/spdk_pid68905
00:26:46.476 Removing: /var/run/dpdk/spdk_pid69537
00:26:46.476 Removing: /var/run/dpdk/spdk_pid69601
00:26:46.476 Removing: /var/run/dpdk/spdk_pid69670
00:26:46.476 Removing: /var/run/dpdk/spdk_pid69698
00:26:46.476 Removing: /var/run/dpdk/spdk_pid69777
00:26:46.476 Removing: /var/run/dpdk/spdk_pid69805
00:26:46.476 Removing: /var/run/dpdk/spdk_pid69890
00:26:46.476 Removing: /var/run/dpdk/spdk_pid69918
00:26:46.476 Removing: /var/run/dpdk/spdk_pid69969
00:26:46.476 Removing: /var/run/dpdk/spdk_pid70005
00:26:46.476 Removing: /var/run/dpdk/spdk_pid70051
00:26:46.476 Removing: /var/run/dpdk/spdk_pid70081
00:26:46.476 Removing: /var/run/dpdk/spdk_pid70240
00:26:46.476 Removing: /var/run/dpdk/spdk_pid70270
00:26:46.476 Removing: /var/run/dpdk/spdk_pid70349
00:26:46.476 Removing: /var/run/dpdk/spdk_pid70423
00:26:46.734 Removing: /var/run/dpdk/spdk_pid70453
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70506
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70531
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70560
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70585
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70614
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70639
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70668
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70695
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70724
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70749
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70778
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70798
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70832
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70853
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70886
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70901
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70936
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70955
00:26:46.735 Removing: /var/run/dpdk/spdk_pid70990
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71004
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71044
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71058
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71087
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71112
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71141
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71155
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71195
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71209
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71249
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71266
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71295
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71320
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71349
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71372
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71409
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71432
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71469
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71489
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71523
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71543
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71584
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71655
00:26:46.735 Removing: /var/run/dpdk/spdk_pid71760
00:26:46.735 Removing: /var/run/dpdk/spdk_pid72197
00:26:46.735 Removing: /var/run/dpdk/spdk_pid79180
00:26:46.735 Removing: /var/run/dpdk/spdk_pid79530
00:26:46.735 Removing: /var/run/dpdk/spdk_pid81965
00:26:46.735 Removing: /var/run/dpdk/spdk_pid82353
00:26:46.735 Removing: /var/run/dpdk/spdk_pid82622
00:26:46.735 Removing: /var/run/dpdk/spdk_pid82668
00:26:46.735 Removing: /var/run/dpdk/spdk_pid82985
00:26:46.735 Removing: /var/run/dpdk/spdk_pid83035
00:26:46.735 Removing: /var/run/dpdk/spdk_pid83425
00:26:46.735 Removing: /var/run/dpdk/spdk_pid83954
00:26:46.735 Removing: /var/run/dpdk/spdk_pid84391
00:26:46.735 Removing: /var/run/dpdk/spdk_pid85381
00:26:46.735 Removing: /var/run/dpdk/spdk_pid86372 00:26:46.735 Removing: /var/run/dpdk/spdk_pid86484 00:26:46.735 Removing: /var/run/dpdk/spdk_pid86552 00:26:46.735 Removing: /var/run/dpdk/spdk_pid88032 00:26:46.735 Removing: /var/run/dpdk/spdk_pid88279 00:26:46.735 Removing: /var/run/dpdk/spdk_pid88713 00:26:46.735 Removing: /var/run/dpdk/spdk_pid88825 00:26:46.735 Removing: /var/run/dpdk/spdk_pid88978 00:26:46.735 Removing: /var/run/dpdk/spdk_pid89018 00:26:46.735 Removing: /var/run/dpdk/spdk_pid89062 00:26:46.735 Removing: /var/run/dpdk/spdk_pid89109 00:26:46.735 Removing: /var/run/dpdk/spdk_pid89271 00:26:46.735 Removing: /var/run/dpdk/spdk_pid89425 00:26:46.735 Removing: /var/run/dpdk/spdk_pid89690 00:26:46.735 Removing: /var/run/dpdk/spdk_pid89813 00:26:46.735 Removing: /var/run/dpdk/spdk_pid90241 00:26:46.735 Removing: /var/run/dpdk/spdk_pid90634 00:26:46.735 Removing: /var/run/dpdk/spdk_pid90640 00:26:46.735 Removing: /var/run/dpdk/spdk_pid92897 00:26:46.735 Removing: /var/run/dpdk/spdk_pid93210 00:26:46.735 Removing: /var/run/dpdk/spdk_pid93735 00:26:46.994 Removing: /var/run/dpdk/spdk_pid93737 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94084 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94098 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94112 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94143 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94154 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94296 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94299 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94411 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94414 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94522 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94524 00:26:46.994 Removing: /var/run/dpdk/spdk_pid94994 00:26:46.994 Removing: /var/run/dpdk/spdk_pid95038 00:26:46.994 Removing: /var/run/dpdk/spdk_pid95195 00:26:46.994 Removing: /var/run/dpdk/spdk_pid95311 00:26:46.994 Removing: /var/run/dpdk/spdk_pid95711 00:26:46.994 Removing: /var/run/dpdk/spdk_pid95967 00:26:46.994 Removing: /var/run/dpdk/spdk_pid96465 00:26:46.994 Removing: /var/run/dpdk/spdk_pid97021 00:26:46.994 Removing: /var/run/dpdk/spdk_pid97517 00:26:46.994 Removing: /var/run/dpdk/spdk_pid97607 00:26:46.994 Removing: /var/run/dpdk/spdk_pid97692 00:26:46.994 Removing: /var/run/dpdk/spdk_pid97782 00:26:46.994 Removing: /var/run/dpdk/spdk_pid97941 00:26:46.994 Removing: /var/run/dpdk/spdk_pid98031 00:26:46.994 Removing: /var/run/dpdk/spdk_pid98116 00:26:46.994 Removing: /var/run/dpdk/spdk_pid98205 00:26:46.994 Removing: /var/run/dpdk/spdk_pid98541 00:26:46.994 Removing: /var/run/dpdk/spdk_pid99256 00:26:46.994 Clean 00:26:46.994 killing process with pid 61834 00:26:46.994 killing process with pid 61835 00:26:46.994 16:46:24 -- common/autotest_common.sh@1446 -- # return 0 00:26:46.994 16:46:24 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:26:46.994 16:46:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:46.994 16:46:24 -- common/autotest_common.sh@10 -- # set +x 00:26:47.253 16:46:24 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:26:47.253 16:46:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.253 16:46:24 -- common/autotest_common.sh@10 -- # set +x 00:26:47.253 16:46:24 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:47.253 16:46:24 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:47.253 16:46:24 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:47.253 16:46:24 
00:26:47.253 16:46:24 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:26:47.253 16:46:24 -- spdk/autotest.sh@383 -- # hostname
00:26:47.253 16:46:24 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:26:47.511 geninfo: WARNING: invalid characters removed from testname!
00:27:09.436 16:46:44 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:10.004 16:46:47 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:11.907 16:46:49 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:14.439 16:46:51 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:16.342 16:46:53 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:18.245 16:46:55 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:20.778 16:46:57 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:20.778 16:46:58 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:27:20.778 16:46:58 -- common/autotest_common.sh@1690 -- $ lcov --version
00:27:20.778 16:46:58 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:27:20.778 16:46:58 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:27:20.778 16:46:58 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:27:20.778 16:46:58 -- scripts/common.sh@332 -- $ local ver1 ver1_l
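[Editor's note] The lcov sequence above is the coverage post-processing: capture the test-time counters (-c) into cov_test.info, merge (-a) the base and test captures into cov_total.info, then repeatedly strip (-r) paths that should not count toward SPDK coverage (DPDK sources, system headers, example and tool apps). A condensed sketch of the same flow, with the repeated --rc flags factored into a variable much as the script's own LCOV_OPTS does; the loop form is an illustration, not the script's literal shape:

  # Hedged sketch of the coverage merge/filter pipeline logged above.
  OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
  out=/home/vagrant/spdk_repo/spdk/../output
  lcov $OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  # the '/usr/*' pass in the log additionally passes --ignore-errors unused,unused
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done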
00:27:20.778 16:46:58 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:27:20.778 16:46:58 -- scripts/common.sh@335 -- $ IFS=.-:
00:27:20.778 16:46:58 -- scripts/common.sh@335 -- $ read -ra ver1
00:27:20.778 16:46:58 -- scripts/common.sh@336 -- $ IFS=.-:
00:27:20.778 16:46:58 -- scripts/common.sh@336 -- $ read -ra ver2
00:27:20.778 16:46:58 -- scripts/common.sh@337 -- $ local 'op=<'
00:27:20.778 16:46:58 -- scripts/common.sh@339 -- $ ver1_l=2
00:27:20.778 16:46:58 -- scripts/common.sh@340 -- $ ver2_l=1
00:27:20.778 16:46:58 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:27:20.778 16:46:58 -- scripts/common.sh@343 -- $ case "$op" in
00:27:20.778 16:46:58 -- scripts/common.sh@344 -- $ : 1
00:27:20.778 16:46:58 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:27:20.778 16:46:58 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:20.778 16:46:58 -- scripts/common.sh@364 -- $ decimal 1
00:27:20.778 16:46:58 -- scripts/common.sh@352 -- $ local d=1
00:27:20.778 16:46:58 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:27:20.778 16:46:58 -- scripts/common.sh@354 -- $ echo 1
00:27:20.778 16:46:58 -- scripts/common.sh@364 -- $ ver1[v]=1
00:27:20.778 16:46:58 -- scripts/common.sh@365 -- $ decimal 2
00:27:20.778 16:46:58 -- scripts/common.sh@352 -- $ local d=2
00:27:20.778 16:46:58 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:27:20.778 16:46:58 -- scripts/common.sh@354 -- $ echo 2
00:27:20.778 16:46:58 -- scripts/common.sh@365 -- $ ver2[v]=2
00:27:20.778 16:46:58 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:27:20.778 16:46:58 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:27:20.778 16:46:58 -- scripts/common.sh@367 -- $ return 0
00:27:20.778 16:46:58 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:20.778 16:46:58 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:27:20.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:20.778 --rc genhtml_branch_coverage=1
00:27:20.778 --rc genhtml_function_coverage=1
00:27:20.778 --rc genhtml_legend=1
00:27:20.778 --rc geninfo_all_blocks=1
00:27:20.778 --rc geninfo_unexecuted_blocks=1
00:27:20.778
00:27:20.778 '
00:27:20.778 16:46:58 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:27:20.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:20.778 --rc genhtml_branch_coverage=1
00:27:20.778 --rc genhtml_function_coverage=1
00:27:20.778 --rc genhtml_legend=1
00:27:20.778 --rc geninfo_all_blocks=1
00:27:20.778 --rc geninfo_unexecuted_blocks=1
00:27:20.778
00:27:20.778 '
00:27:20.778 16:46:58 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:27:20.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:20.778 --rc genhtml_branch_coverage=1
00:27:20.778 --rc genhtml_function_coverage=1
00:27:20.778 --rc genhtml_legend=1
00:27:20.778 --rc geninfo_all_blocks=1
00:27:20.778 --rc geninfo_unexecuted_blocks=1
00:27:20.778
00:27:20.778 '
00:27:20.778 16:46:58 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:27:20.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:20.778 --rc genhtml_branch_coverage=1
00:27:20.778 --rc genhtml_function_coverage=1
00:27:20.778 --rc genhtml_legend=1
00:27:20.778 --rc geninfo_all_blocks=1
00:27:20.778 --rc geninfo_unexecuted_blocks=1
00:27:20.778
00:27:20.778 '
00:27:20.778 16:46:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
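[Editor's note] The trace above walks through cmp_versions from scripts/common.sh, which decides that lcov 1.15 is older than 2: both version strings are split on '.', '-', and ':' into arrays, the fields are compared as integers left to right, and the function returns success or failure according to the requested operator; here the first field 1 < 2 settles it. A self-contained re-implementation of that logic (simplified: the real helper also validates each field through its decimal function):

  # Hedged re-implementation of the version comparison traced above.
  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v a b
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}        # pad missing fields with 0
          (( a > b )) && { [[ $op == '>' ]]; return; }
          (( a < b )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]                          # all fields equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo "old lcov: pass only the --rc options it understands"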
00:27:20.778 16:46:58 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:20.778 16:46:58 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:20.778 16:46:58 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:20.778 16:46:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:20.778 16:46:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:20.778 16:46:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:20.778 16:46:58 -- paths/export.sh@5 -- $ export PATH
00:27:20.778 16:46:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:20.778 16:46:58 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:27:20.778 16:46:58 -- common/autobuild_common.sh@440 -- $ date +%s
00:27:20.778 16:46:58 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731775618.XXXXXX
00:27:20.778 16:46:58 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731775618.gSEELY
00:27:20.778 16:46:58 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:27:20.778 16:46:58 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']'
00:27:20.778 16:46:58 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:27:20.778 16:46:58 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:27:20.778 16:46:58 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:27:20.778 16:46:58 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:27:20.778 16:46:58 -- common/autobuild_common.sh@456 -- $ get_config_params
00:27:20.778 16:46:58 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:27:20.778 16:46:58 -- common/autotest_common.sh@10 -- $ set +x
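[Editor's note] autobuild then sets up a private scratch area: date +%s supplies an epoch timestamp and mktemp -dt creates a uniquely named directory under $TMPDIR (default /tmp), replacing the XXXXXX suffix with random characters, which is where /tmp/spdk_1731775618.gSEELY comes from. A minimal sketch of the same pattern (the trap-based cleanup is an assumption for illustration, not something the trace shows):

  # Hedged sketch of the scratch-workspace setup traced above.
  stamp=$(date +%s)                                    # e.g. 1731775618
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX")  # XXXXXX -> random suffix
  export SPDK_WORKSPACE
  trap 'rm -rf "$SPDK_WORKSPACE"' EXIT                 # assumed cleanup on exit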
00:27:20.778 16:46:58 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang'
00:27:20.778 16:46:58 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:27:20.778 16:46:58 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:27:20.778 16:46:58 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:27:20.778 16:46:58 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:27:20.778 16:46:58 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:27:20.778 16:46:58 -- spdk/autopackage.sh@19 -- $ timing_finish
00:27:20.778 16:46:58 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:20.778 16:46:58 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:27:20.778 16:46:58 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:20.778 16:46:58 -- spdk/autopackage.sh@20 -- $ exit 0
+ [[ -n 5965 ]]
+ sudo kill 5965
00:27:21.077 [Pipeline] }
00:27:21.093 [Pipeline] // timeout
00:27:21.098 [Pipeline] }
00:27:21.112 [Pipeline] // stage
00:27:21.118 [Pipeline] }
00:27:21.132 [Pipeline] // catchError
00:27:21.140 [Pipeline] stage
00:27:21.142 [Pipeline] { (Stop VM)
00:27:21.154 [Pipeline] sh
00:27:21.434 + vagrant halt
00:27:24.765 ==> default: Halting domain...
00:27:31.338 [Pipeline] sh
00:27:31.615 + vagrant destroy -f
00:27:34.146 ==> default: Removing domain...
00:27:34.160 [Pipeline] sh
00:27:34.440 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:27:34.450 [Pipeline] }
00:27:34.466 [Pipeline] // stage
00:27:34.472 [Pipeline] }
00:27:34.486 [Pipeline] // dir
00:27:34.491 [Pipeline] }
00:27:34.505 [Pipeline] // wrap
00:27:34.513 [Pipeline] }
00:27:34.525 [Pipeline] // catchError
00:27:34.535 [Pipeline] stage
00:27:34.537 [Pipeline] { (Epilogue)
00:27:34.550 [Pipeline] sh
00:27:34.833 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:40.114 [Pipeline] catchError
00:27:40.116 [Pipeline] {
00:27:40.128 [Pipeline] sh
00:27:40.409 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:40.668 Artifacts sizes are good
00:27:40.677 [Pipeline] }
00:27:40.692 [Pipeline] // catchError
00:27:40.706 [Pipeline] archiveArtifacts
00:27:40.714 Archiving artifacts
00:27:40.830 [Pipeline] cleanWs
00:27:40.841 [WS-CLEANUP] Deleting project workspace...
00:27:40.841 [WS-CLEANUP] Deferred wipeout is used...
00:27:40.848 [WS-CLEANUP] done
00:27:40.850 [Pipeline] }
00:27:40.865 [Pipeline] // stage
00:27:40.871 [Pipeline] }
00:27:40.884 [Pipeline] // node
00:27:40.889 [Pipeline] End of Pipeline
00:27:40.924 Finished: SUCCESS
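[Editor's note] One closing note on the timing_finish step traced just before the pipeline teardown above: it renders the per-step durations in timing.txt into a flame graph using flamegraph.pl, labeling frames as build steps and counts as seconds. A sketch of that invocation, with the SVG redirect added for illustration since the trace does not show where the output is written:

  # Hedged sketch of the timing_finish flame-graph step.
  /usr/local/FlameGraph/flamegraph.pl \
      --title 'Build Timing' --nametype Step: --countname seconds \
      /home/vagrant/spdk_repo/spdk/../output/timing.txt > timing.svg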